
SEARCH RESULTS


  • Big-Oh Notation: Key to Evaluating Algorithm Efficiency

    This article explains the importance of Big-Oh notation in computer science for evaluating and comparing the time and space complexity of algorithms. Alexander S. Ricciardi, August 28th, 2024

    In computer science, Big-Oh notation is used to describe the time complexity or space complexity of algorithms (Geeks for Geeks, 2024). Mathematically, it defines the upper bound of an algorithm's growth rate, known as the asymptotic upper bound, and is denoted as f(n) is O(g(n)) or f(n) ∈ O(g(n)), pronounced "f(n) is Big-Oh of g(n)". The term "asymptotic" refers to the behavior of the function as its input size n approaches infinity. In the context of computer science, it describes the worst-case scenario for time complexity or space complexity. For example, an algorithm with a faster-growing time complexity will take far more time than one with O(n²) as the input size increases, with n representing the number of primitive operations. Primitive operations are low-level instructions with a constant execution time, such as assigning a value to a variable, performing an arithmetic operation, comparing two values, or accessing an element in an array by its index.

    Definition of Big-Oh: Let f(n) and g(n) be functions mapping positive integers to positive real numbers. We say that f(n) ∈ O(g(n)) if there is a real constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≤ c·g(n), for n ≥ n₀ (Carrano & Henry, 2018).

    Figure 1: Big-Oh Time Complexity. Note: From Big O notation tutorial – A guide to Big O by Geeks for Geeks (2024).

    Here are some of the properties of the Big-Oh notation:

    - f(n) = a is O(1), where a is a constant. Ex: a = 4, f(5) = 4 and f(7) = 4. Justification: a ≤ a, hence a = 1·a = c·a, for c = 1, when n ≥ n₀ = 1.
    - f(n) = 7n + 2 is O(n). Justification: 7n + 2 ≤ (7 + 2)n = cn, for c = 9, when n ≥ n₀ = 1.
    - f(n) = 6n⁴ + 2n³ + n² − 3n is O(n⁴). Justification: 6n⁴ + 2n³ + n² − 3n ≤ (6 + 2 + 1)n⁴ = cn⁴, for c = 9, when n ≥ n₀ = 1 (the −3n term is negative for n ≥ 1 and can be dropped).
    - f(n) = 7n³ + 5n log n + 9n is O(n³). Justification: 7n³ + 5n log n + 9n ≤ (7 + 5 + 9)n³ = cn³, for c = 21, when n ≥ n₀ = 1.
    - f(n) = 5 log n + 3 is O(log n). Justification: 5 log n + 3 ≤ (5 + 3) log n = c log n, for c = 8, when n ≥ n₀ = 2. Note that log n is 0 for n = 1.
    - f(n) = 3ⁿ⁺² is O(3ⁿ). Justification: 3ⁿ⁺² = 3² · 3ⁿ = 9 · 3ⁿ = c · 3ⁿ, for c = 9, when n ≥ n₀ = 1.
    - f(n) = 7n + 75 log n is O(n). Justification: 7n + 75 log n ≤ (7 + 75)n = cn, for c = 82, when n ≥ n₀ = 1.
    (Carrano & Henry, 2018)

    Note that the Big-Oh notation eliminates constant factors and lower-order terms. The major function types used to analyze time complexity are the constant, logarithmic, linear, and quadratic complexities; their definitions are given below.

    Constant Time Complexity - O(1): O(g(n)) where g(n) = c, for some fixed constant c such as c = 2, c = 33, or c = 300. The time complexity remains constant regardless of the input size: the algorithm takes the same amount of time to complete no matter how large n grows. This is the most efficient time complexity.

    Logarithmic Complexity - O(log n): O(g(n)) where g(n) = log n. Note that in computer science log n = log₂ n, and x = log₂ n iff 2ˣ = n. The time complexity grows logarithmically with input size, meaning it increases slowly.

    Linear Complexity - O(n): O(g(n)) where g(n) = n. Given an input value n, the linear function g assigns the value n itself. The time complexity grows linearly with input size, meaning the time taken increases in direct proportion to the input size.

    Quadratic Complexity - O(n²): O(g(n)) where g(n) = n². The time complexity grows quadratically with input size, meaning the time taken increases by the square of the input size.
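    To make the definition and the justifications above concrete, here is a minimal Python sketch (my own illustration, not from the article; the function name satisfies_big_oh is made up) that numerically spot-checks f(n) ≤ c·g(n) over a finite range. A finite check is not a proof, but it is a quick sanity test of the chosen constants c and n₀:

        import math

        def satisfies_big_oh(f, g, c, n0, n_max=1_000):
            """Spot-check that f(n) <= c * g(n) for every n in [n0, n_max]."""
            return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

        # f(n) = 7n + 2 is O(n), with c = 9 and n0 = 1
        print(satisfies_big_oh(lambda n: 7 * n + 2, lambda n: n, c=9, n0=1))           # True

        # f(n) = 5 log n + 3 is O(log n), with c = 8 and n0 = 2
        print(satisfies_big_oh(lambda n: 5 * math.log2(n) + 3, math.log2, c=8, n0=2))  # True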
    As shown above, the time or space complexity directly relates to the Big-Oh bounding function g(n). For example, the quadratic function O(n²) has a greater growth rate than the logarithmic function O(log n). In other words, O(n²) has greater complexity than O(log n), making an algorithm associated with O(n²) worse than one associated with O(log n). See Figure 2 for the growth rates of different Big-Oh functions.

    Figure 2: Big-Oh - O(g(n)) Growth Rates.

    Table 1 (Big-Oh Notation Summary) illustrates various O(g(n)) bounds related to common computer science algorithms. Note: From Big O Notation by Cowan (n.d.).

    Big-Oh Pros and Cons

    The Big-Oh notation has both pros and cons. One of the main advantages is that it provides a clear way to express algorithm efficiency by eliminating constant factors and lower-order terms, allowing the focus to be on how the algorithm's performance scales as the input size grows, rather than being tied to specific computing system specifications or the type of compiler used. In other words, it is platform independent and programming language agnostic. Both of those factors have a significant impact on the final running time of an algorithm, but that impact is most likely a constant multiple (Educative, n.d.).

    The major disadvantage of Big-Oh notation is that it loses important information by eliminating constant factors and lower-order terms, and by ignoring computing system specifications and the type of compiler used. Additionally, it only focuses on the worst-case scenario, the upper bound, leaving out potential insights into how the algorithm might perform under typical conditions or with smaller input sizes. In other words, Big-Oh notation gives a worst-case big picture of algorithm performance and ignores nuances that could be crucial for better understanding the algorithm's behavior.

    The Importance of Big-Oh

    Big-Oh analysis, or any other algorithm analysis, can help uncover performance bottlenecks, identify opportunities for optimization, and influence software design by selecting more appropriate algorithms for specific tasks. This makes Big-Oh, and algorithm analysis in general, crucial for developing efficient and reliable software systems. One real-world application where Big-Oh analysis is essential for efficiency is in e-commerce platforms like Amazon, where millions of products need to be searched quickly and accurately. An inefficient search algorithm can directly impact user experience by slowing down searches, frustrating potential customers, and leading to abandoned carts and missed sales. Thus, optimizing or selecting the right search algorithms using Big-Oh or another algorithm analysis is paramount for the efficiency of the application and the user experience.
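    As a hedged illustration of the search example (the catalog data and function names below are my own, not from the article), a linear scan is O(n) while a binary search over a sorted key is O(log n), which is why the choice of search algorithm matters at e-commerce scale:

        from bisect import bisect_left

        def linear_search(product_ids, target):
            """O(n): may scan every item before finding the target."""
            for i, pid in enumerate(product_ids):
                if pid == target:
                    return i
            return -1

        def binary_search(sorted_product_ids, target):
            """O(log n): halves the search space at each step (the list must be sorted)."""
            i = bisect_left(sorted_product_ids, target)
            if i < len(sorted_product_ids) and sorted_product_ids[i] == target:
                return i
            return -1

        catalog = list(range(0, 10_000_000, 2))   # pretend catalog of sorted product IDs
        print(linear_search(catalog, 9_999_998))  # ~5,000,000 comparisons
        print(binary_search(catalog, 9_999_998))  # ~23 comparisons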
    In conclusion, Big-Oh notation is a powerful tool for evaluating and comparing the efficiency of algorithms. While it has limitations in ignoring constant factors and average-case scenarios, its ability to highlight worst-case performance and guide algorithm selection makes it indispensable in developing efficient and reliable software systems, particularly in high-demand applications like e-commerce platforms.

    References:
    Carrano, F. M., & Henry, T. M. (2018, January 31). Algorithms: Algorithm analysis. Data structures and abstractions with Java (5th ed.). Pearson.
    Cowan, D. (n.d.). Big O notation. Don Cowan Blog. https://www.donkcowan.com/blog/2013/5/11/big-o-notation
    Educative. (n.d.). Advantages and disadvantages of the Big-O notation. Educative. https://www.educative.io/courses/algorithmic-problem-solving-preparing-for-a-coding-interview/advantages-and-disadvantages-of-the-big-o-notation
    Geeks for Geeks. (2024, March 29). Big O notation tutorial – A guide to Big O analysis. https://www.geeksforgeeks.org/analysis-algorithms-big-o-analysis

  • The Black Box Concept in Graphics Programming and Deep Learning

    This article describes the black box concept in engineering, focusing on its application in graphics programming and deep learning. Alexander S. Ricciardi, August 27th, 2024

    The concept of a black box refers to a system in engineering that is characterized solely by its inputs and outputs; we may not know its internal workings (Angel & Shreiner, 2020). In graphics programming, this concept can be described as developers interacting with the graphics pipeline through a graphical API such as WebGL or OpenGL by declaring specific inputs, such as vertex data and shaders, and receiving the rendered output, without needing to understand or interact with the processes handled internally by the operating system or the GPU; see Figure 1.

    Figure 1: Graphics Pipeline as a Black Box. Note: From 2.3 WebGL Application Programming Interface, Interactive Computer Graphics (8th edition), Figure 2.3 Graphics System as a Black Box, by Angel and Shreiner (2020).

    Notice that the flow chart in Figure 1 has three elements: the application program, the graphics system, and the input or output devices. The flow chart can be described as follows: function calls go from the application program to the graphics system; output goes from the graphics system to the input or output devices; input goes from the input or output devices to the graphics system; and data goes from the graphics system back to the application program.

    This abstraction allows graphics programmers to focus on developing and optimizing visual outputs and user experiences rather than getting bogged down in the complexities of the graphics system architecture and processing details. In other words, application programming interfaces such as WebGL, OpenGL, and DirectX allow graphics programmers to create sophisticated graphics without worrying about the low-level operations managed by the operating system and the functionalities of the GPU, enabling them to concentrate on the creative aspects of their work.

    Another domain where the black box concept is used is Artificial Intelligence (AI), particularly Deep Learning (DL). Just as in graphics programming with WebGL and OpenGL, DL developers interact with APIs through Python libraries such as TensorFlow and PyTorch, without needing to understand or interact with the processes handled internally by the operating system or the GPU (see the short PyTorch sketch after the references below). Furthermore, AI systems often function as black boxes themselves, making decisions based on complex algorithms and vast amounts of data that are opaque to users. This is known as the "black box problem" in AI, where the inputs and outputs are clear, but the pathway that the AI took (its reasoning) from input to output is not easily understood or visible, even to the developers who created the systems (Rawashdeh, 2023).

    The black box concept is a powerful abstraction tool, as shown by its application in graphics programming and DL development, enabling programmers and developers to be more efficient and to concentrate on the creative and engineering aspects of their work.

    References:
    Angel, E., & Shreiner, D. (2020). Interactive computer graphics (8th ed.). Pearson Education, Inc. ISBN: 9780135258262
    Rawashdeh, S. (2023, March 6). AI's mysterious 'black box' problem, explained. University of Michigan-Dearborn. https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained
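    The following is a minimal PyTorch sketch (my own illustration, not from the article) of the black-box interaction described above: the developer supplies an input tensor and reads an output tensor, while the layer internals, autograd machinery, and any GPU work stay hidden behind the API.

        import torch
        import torch.nn as nn

        # A small network treated as a black box: only its inputs and outputs matter here.
        model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

        x = torch.randn(1, 4)   # input: a batch containing one 4-feature vector
        y = model(x)            # output: two raw scores; the internal computation is opaque
        print(y.shape)          # torch.Size([1, 2])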

  • Process Synchronization in Operating Systems: Key Challenges and Solutions

    This article discusses methods and techniques of process synchronization in operating systems, focusing on classic problems like the Bounded-Buffer, Readers-Writers, and Dining-Philosophers problems, and their solutions using semaphores, deadlock prevention, and mutual exclusion. Alexander S. Ricciardi, March 27th, 2024

    In the context of an operating system (OS), process synchronization can be defined as a set of methods and techniques used to coordinate the concurrent execution of multiple processes or threads. In a multi-process environment, which can also be referred to as "multiprocessing" or "parallel computing", process synchronization plays a crucial role in ensuring that shared data is accessed and modified correctly by addressing several problems that may arise in such environments. Classic problems of process synchronization are the Bounded-Buffer, Readers-Writers, and Dining-Philosophers problems. Solutions to these problems can be developed using tools such as semaphores, binary semaphores, message passing, and monitors (Silberschatz et al., 2018, p. 314).

    The Bounded-Buffer Problem

    The Bounded-Buffer Problem is a classic synchronization problem that involves coordinating the operations of producers and consumers accessing a fixed-size buffer. A bounded buffer can lead to a race condition. A race condition is "a situation in which multiple threads or processes read and write a shared data item, and the final result depends on the relative timing of their execution" (Stallings, 2018, Chapter 5). Mutual exclusion must be enforced to ensure that only one process can access the shared buffer at a given time. The following is bounded-buffer producer and consumer pseudocode:

        /* producer */
        while (true) {
            /* produce item v */
            while ((in + 1) % n == out)
                /* do nothing */;
            b[in] = v;
            in = (in + 1) % n;
        }

        /* consumer */
        while (true) {
            while (in == out)
                /* do nothing */;
            w = b[out];
            out = (out + 1) % n;
            /* consume item w */;
        }

    (Stallings, 2018, Chapter 5.4)

    The problem description:
    - The fixed-size buffer can hold a limited number of items.
    - Multiple producers may add items to the buffer concurrently.
    - Multiple consumers may remove items from the buffer concurrently.
    - Producers must wait if the buffer is full.
    - Consumers must wait if the buffer is empty.

    A solution to the Bounded-Buffer Problem is the implementation of a semaphore, which is a mutual exclusion mechanism. A semaphore (also called a counting semaphore or a general semaphore) is an integer value used for signaling among processes, and it can only be accessed through two standard atomic operations: semWait() and semSignal(). The letter 's' is generally used to denote a semaphore; below is an example of semaphore pseudocode:

        struct semaphore {
            int count;
            queueType queue;
        };
        void semWait(semaphore s) {
            s.count--;
            if (s.count < 0) {
                /* place this process in s.queue */;
                /* block this process */;
            }
        }
        void semSignal(semaphore s) {
            s.count++;
            if (s.count <= 0) {
                /* remove a process P from s.queue */;
                /* place process P on ready list */;
            }
        }

    (Stallings, 2018, Figure 5.6)

    Note that only three operations may be performed on a semaphore, all of which are atomic: initialize, decrement, and increment. The decrement operation may result in the blocking of a process, and the increment operation may result in the unblocking of a process.
    Furthermore, a binary semaphore is a semaphore that takes on only the values 0 and 1, and a mutex is "similar to a binary semaphore. A key difference between the two is that the process that locks the mutex (sets the value to 0) must be the one to unlock it (sets the value to 1)" (Stallings, 2018, Chapter 5.4). Below is an example of binary semaphore pseudocode, defined with the operations semWaitB() and semSignalB():

        struct binary_semaphore {
            enum {zero, one} value;
            queueType queue;
        };
        void semWaitB(binary_semaphore s) {
            if (s.value == one)
                s.value = zero;
            else {
                /* place this process in s.queue */;
                /* block this process */;
            }
        }
        void semSignalB(binary_semaphore s) {
            if (s.queue is empty())
                s.value = one;
            else {
                /* remove a process P from s.queue */;
                /* place process P on ready list */;
            }
        }

    (Stallings, 2018, Figure 5.7)

    Note that:
    - A binary semaphore may be initialized to zero or one.
    - The semWaitB(semaphore s) operation checks the value: if it is zero, the process executing semWaitB() is blocked; if it is one, the value is changed to zero and the process continues execution.
    - The semSignalB(semaphore s) operation checks whether the queue is empty; if so, the semaphore is set to one, otherwise a process blocked by a semWaitB() is unblocked (removed from the queue) and placed on the ready list.

    The following pseudocode shows a possible solution to the Bounded-Buffer Problem using semaphores:

        /* program boundedbuffer */
        const int sizeofbuffer = /* buffer size */;
        semaphore s = 1, n = 0, e = sizeofbuffer;

        void producer() {
            while (true) {
                produce();      /* produce item */
                semWait(e);
                semWait(s);
                append();       /* add item to the end of the list */
                semSignal(s);
                semSignal(n);
            }
        }

        void consumer() {
            while (true) {
                semWait(n);
                semWait(s);
                take();         /* take item from the list */
                semSignal(s);
                semSignal(e);
                consume();      /* consume item */
            }
        }

        void main() {
            parbegin(producer, consumer);
        }

    (Stallings, 2018, Figure 5.13)
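    As a rough Python analogue of the semaphore-based solution above (a minimal sketch of my own using the standard threading module, not code from Stallings), the semaphores e, n, and s map to two counting semaphores and a lock:

        import threading
        from collections import deque

        BUFFER_SIZE = 5
        buffer = deque()
        e = threading.Semaphore(BUFFER_SIZE)  # empty slots (semaphore e)
        n = threading.Semaphore(0)            # filled slots (semaphore n)
        s = threading.Lock()                  # mutual exclusion on the buffer (semaphore s)

        def producer():
            for item in range(10):
                e.acquire()                   # wait for an empty slot
                with s:
                    buffer.append(item)       # append(): add item to the buffer
                n.release()                   # signal one more filled slot

        def consumer():
            for _ in range(10):
                n.acquire()                   # wait for a filled slot
                with s:
                    item = buffer.popleft()   # take(): remove item from the buffer
                e.release()                   # signal one more empty slot
                print("consumed", item)

        t1, t2 = threading.Thread(target=producer), threading.Thread(target=consumer)
        t1.start(); t2.start()
        t1.join(); t2.join()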
    Readers-Writers Problem

    The Readers-Writers Problem can be described as multiple readers and writers concurrently accessing a resource (e.g., a file or database) and potentially affecting data integrity. The conditions that need to be met to ensure data integrity are:
    - Any number of readers may simultaneously read the file.
    - Only one writer at a time may write to the file.
    - If a writer is writing to the file, no reader may read it.
    Note that readers are processes that are not required to exclude one another, and writers are processes that are required to exclude all other processes, readers and writers alike (Stallings, 2018, Chapter 5.7).

    A solution to the Readers-Writers Problem is the implementation of semaphores or message passing. Message passing is a mechanism that allows processes or threads to communicate and synchronize their actions. In message passing, communication occurs by explicitly sending a message from a sender to a receiver. The sender encapsulates the data or information to be transmitted into a message and sends it to the receiver. The receiver then receives the message and extracts the information or data from it.

    Dining-Philosophers Problem

    The Dining-Philosophers Problem describes a scenario where, for example, five philosophers share a meal at a round table. Every philosopher has a plate, with one fork to their right and one fork to their left, for a total of five forks on the table. Each philosopher needs two forks to eat. The problem arises when all the philosophers decide to eat at the same time. The Dining-Philosophers Problem is a classic synchronization problem; it is a metaphor for the problems of deadlock and starvation, which can occur when multiple processes attempt to access multiple resources simultaneously. Starvation is the situation in which a process or thread waits indefinitely within a semaphore (Silberschatz et al., 2018, p. G-32). Deadlock is the state in which two processes or threads are stuck waiting for an event that can only be caused by one of the processes or threads (Silberschatz et al., 2018, p. G-9).

    Two main methods exist to prevent or avoid deadlocks. The first one is the deadlock avoidance approach, which grants access to a resource only if doing so cannot result in a deadlock. The second one is the deadlock prevention approach, which involves changing the rules so that processes will not make requests that could result in deadlock. Note that for a deadlock to occur, each of the four conditions listed below must hold:
    - Mutual Exclusion: only one process at a time can use a resource.
    - Hold and Wait: a process holds at least one resource while waiting to acquire additional resources held by other processes.
    - No Preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
    - Circular Wait: a set of waiting processes exists in which each process is waiting for a resource held by the next process in the set, forming a cycle.
    In other words, a deadlock will not occur if at least one of these conditions does not hold. Deadlock prevention ensures that at least one of the four conditions cannot hold. Deadlock avoidance, on the other hand, does not restrict the first three conditions, but makes resource-allocation decisions dynamically so that a circular wait, and therefore a deadlock, can never occur.

    Finally, one of the solutions to the Dining-Philosophers Problem is a monitor-based solution. A monitor is a programming-language construct that provides mutual exclusion, in effect putting a lock on shared objects. The main characteristics of a monitor are the following:
    - The local data variables are accessible only by the monitor's procedures and not by any external procedure.
    - A process enters the monitor by invoking one of its procedures.
    - Only one process may be executing in the monitor at a time; any other processes that have invoked the monitor are blocked, waiting for the monitor to become available.
    (Stallings, 2018, Chapter 5.5)

    In conclusion, process synchronization is a complex subject, and it is essential in operating systems to coordinate the concurrent execution of multiple processes or threads. It addresses problems such as the Bounded-Buffer Problem, the Readers-Writers Problem, and the Dining-Philosophers Problem. These problems highlight the challenges of managing shared resources, ensuring mutual exclusion, avoiding race conditions and deadlocks, and preventing starvation.

    References:
    Silberschatz, A., Galvin, P. B., & Gagne, G. (2018). Operating system concepts [PDF]. Wiley. https://os.ecci.ucr.ac.cr/slides/Abraham-Silberschatz-Operating-System-Concepts-10th-2018.pdf
    Stallings, W. (2018). Operating systems: Internals and design principles. Pearson.

  • Exception Handling in Python

    This article explores the various techniques used to handle exceptions in Python, including try-except blocks, custom exceptions, and advanced features like exception chaining and enrichment. Alexander S. Ricciardi, March 26th, 2024

    Python provides a robust exception-handling framework that not only allows programmers to implement code that prevents crashes but also offers feedback and maintains application stability. Moreover, it enables developers to manage errors gracefully using constructs like try-except blocks, custom exceptions, and more.

    • The Try-Except Block
    In the try-except block, the code that may raise an exception is placed in the try block, and the except block specifies the actions to take if an exception occurs (Python Software Foundation, n.d.). For example:

        try:
            result = 1 / 0
        except ZeroDivisionError:
            print("Cannot divide by zero.")

    To catch multiple exceptions in one try-except block, we can use a try block with several except blocks to generate specific responses for each exception type, or we can use a tuple to catch multiple exceptions with a single except expression. For example:

        # One try block and several except blocks
        try:
            result = 1 / 'a'
        except ZeroDivisionError:
            print("Cannot divide by zero.")
        except TypeError:
            print("Type error occurred.")

        # One try block and one except tuple block
        try:
            # some operation
            result = 1 / 'a'
        except (ZeroDivisionError, TypeError) as e:
            print(f"Error occurred: {e}")

    • The Else Clause
    The else clause is placed after the try-except blocks and runs if the try block does not raise an exception. For example:

        try:
            result = 1 / 2
        except ZeroDivisionError:
            print("Cannot divide by zero.")
        else:
            print("Division successful.")

    • The Finally Clause
    The finally clause is always placed after the try block and any except blocks. It contains code that runs no matter what, typically for cleaning up resources like files or network connections, even if an exception was raised. For example:

        try:
            result = 1 / 'a'
        except ZeroDivisionError:
            print("Cannot divide by zero.")
        except TypeError:
            print("Type error occurred.")
        else:
            print("Division successful.")
        finally:
            print("Goodbye, world!")

    • The Raise Statement
    The raise statement raises exceptions by forcing an exception to occur, usually to indicate that a certain condition has not been met. For example:

        a = 7
        if a > 5:
            raise ValueError("a must not exceed 5")

    • Exception Chaining
    You can chain exceptions with the raise ... from clause. This is useful for adding context to an original error. For example:

        try:
            open('myfile.txt')
        except FileNotFoundError as e:
            raise RuntimeError("Failed to open file") from e

    • Custom Exceptions
    You can define your own exception classes by inheriting from the Exception class or any other built-in exception class (Mitchell, 2022). For example:

        class MyCustomError(Exception):
            pass

        try:
            raise MyCustomError("An error occurred")
        except MyCustomError as e:
            print(e)

    • Enriching Exceptions
    You can add information or context to an exception by using the add_note() method (available in Python 3.11+) to append custom messages or notes to the exception object, here named e. For example:

        def divide_numbers(a, b):
            try:
                result = a / b
            except ZeroDivisionError as e:
                e.add_note("Cannot divide by zero")
                e.add_note("Please provide a non-zero divisor")
                raise

        try:
            num1 = 10
            num2 = 0
            divide_numbers(num1, num2)
        except ZeroDivisionError as e:
            print("An error occurred:")
            print(str(e))

    Handling exceptions is important for several reasons:
    - Prevents program crashes: unhandled exceptions can cause the program to crash, leading to data loss and a poor user experience.
    - Provides meaningful error messages: by handling exceptions, you can provide users with informative error messages that help them understand what went wrong and how to fix it.
    - Allows for graceful degradation: exception handling enables the program to continue running even if an error occurs (a small sketch of this idea follows below).
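    The following is a minimal sketch of graceful degradation (my own illustration, not from the article): if a primary operation fails, the program reports the problem and falls back to a default instead of crashing. The function name, file name, and fallback values are made up for the example.

        def load_user_settings(path):
            """Return settings parsed from a file, or safe defaults if the file is unusable."""
            defaults = {"theme": "light", "font_size": 12}
            try:
                with open(path, encoding="utf-8") as f:
                    # Toy parser: each line is expected to be "key=value"
                    return dict(line.strip().split("=", 1) for line in f if "=" in line)
            except (FileNotFoundError, PermissionError, ValueError) as e:
                print(f"Could not load settings ({e}); using defaults.")
                return defaults

        settings = load_user_settings("settings.cfg")  # a missing file yields defaults, not a crash
        print(settings)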
    A simple program error handling example:

        #-------------------------------------------
        # Pseudocode:
        # 1. Define a custom exception class called CustomError.
        # 2. Create a function that raises the CustomError exception
        #    based on a condition.
        # 3. Use a try-except block to handle the CustomError exception.
        #-------------------------------------------
        # Program Inputs:
        # - num: An integer value.
        #-------------------------------------------
        # Program Outputs:
        # - Custom exception message when the CustomError exception is raised.
        # - Success message when no exception is raised.
        #-------------------------------------------

        class CustomError(Exception):
            '''A custom exception class.'''
            pass

        def check_number(num):
            '''
            Checks if the given number is positive.
            :param num: an integer value.
            :raises CustomError: if the number is not positive.
            :return: None
            '''
            if num <= 0:
                raise CustomError("Number must be positive.")
            print("Number is valid.")

        #--- Main Program
        def main() -> None:
            '''The main function that demonstrates the usage of custom exceptions.'''
            try:
                check_number(5)   # Valid number
                check_number(-2)  # Raises CustomError
            except CustomError as e:
                print(f"Error: {str(e)}")

        #--- Execute the program
        if __name__ == "__main__":
            main()

        # Output:
        # Number is valid.
        # Error: Number must be positive.

    To summarize, Python provides a comprehensive exception-handling framework that allows programs to handle unexpected situations without failing abruptly. By utilizing constructs such as try-except blocks, custom exceptions, and advanced features like exception chaining and enrichment, developers can ensure that their programs are resilient, user-friendly, and capable of handling unexpected scenarios gracefully.

    References:
    Mitchell, R. (2022, June 13). Custom exceptions. Python essential training [Video]. LinkedIn Learning. https://www.linkedin.com/learning/python-essential-training-14898805/custom-exceptions?autoSkip=true&resume=false&u=2245842
    Python Software Foundation. (n.d.). 8. Errors and exceptions. The Python tutorial. python.org.

  • Understanding Polymorphism in Python

    The article provides an in-depth explanation of polymorphism in Python, highlighting its role in object-oriented programming. Alexander S. Ricciardi, March 26th, 2024

    Polymorphism is a Greek word that means "many shapes" or "many forms." Polymorphism is a fundamental concept in object-oriented programming (OOP). Python is polymorphic, meaning that in Python objects have the ability to take many forms. In simple words, polymorphism allows us to perform the same action in many different ways (Vishal, 2021). Furthermore, in Python, everything is an object/a class. "Guido van Rossum has designed the language according to the principle 'first-class everything'. He wrote: 'One of my goals for Python was to make it so that all objects were "first class." By this, I meant that I wanted all objects that could be named in the language (e.g., integers, strings, functions, classes, modules, methods, and so on) to have equal status.'" (Klein, 2022, 1. Object Oriented Programming)

    To understand polymorphism, it is important to understand the "Duck Typing" concept: "If it looks like a duck and quacks like a duck, then it probably is a duck." In Python, duck typing means that the suitability of an object is determined by the presence of certain methods or attributes, rather than by the actual type of the object. In other words, polymorphism in Python means that a single operator, function, or class method can have multiple forms/behaviors depending on the context.

    1. Operator polymorphism
    Operator polymorphism, or operator overloading, allows an operator like + to perform different operations based on the operand types (Jergenson, 2022). For example:

        # Two integers
        int_1 = 10
        int_2 = 45
        print(str(int_1 + int_2))
        # >>> 55

        # Two strings
        str_1 = "10"
        str_2 = "45"
        print(str_1 + str_2)
        # >>> 1045

    2. Function polymorphism
    Built-in functions like len() can act on multiple data types (e.g., strings, lists) and return the length measured appropriately for each type. For example:

        str_1 = "polymorphic"
        print(str(len(str_1)))
        # >>> 11

        my_lst = [1, 2, 3, 4, 5]
        print(str(len(my_lst)))
        # >>> 5

    3. Class method polymorphism
    Class method polymorphism allows subclasses to override methods inherited from the parent class. For example:

        # Parent class
        class Animal:
            def make_sound(self):
                pass

        # Child class
        class Dog(Animal):
            def make_sound(self):
                return "Woof!"

        # Child class
        class Cat(Animal):
            def make_sound(self):
                return "Meow!"

        def animal_sound(animal):
            print(animal.make_sound())

        dog = Dog()
        cat = Cat()
        animal_sound(dog)  # Output: Woof!
        animal_sound(cat)  # Output: Meow!

    4. Polymorphism with independent classes
    Independent classes can also define methods with the same name that behave differently. For example:

        def enter_obj(obj):
            return obj.action()

        # Independent class
        class Animal:
            def __init__(self, food):
                self.food = food

            # Same name as the Circle method, different functionality
            def action(self):
                print(f"eats {self.food}")

        # Independent class
        class Circle:
            def __init__(self, radius):
                self.radius = radius

            # Same name as the Animal method, different functionality
            def action(self):
                return 3.14 * (self.radius ** 2)

        cow = Animal("grass")
        circ = Circle(7)
        enter_obj(cow)
        print(str(enter_obj(circ)))
        # >>> eats grass
        # >>> 153.86
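    Returning to the duck-typing idea mentioned above, here is a small sketch of my own (the Duck and Robot classes are made up for illustration): a function works with any object that provides a quack() method, regardless of the object's type.

        class Duck:
            def quack(self):
                return "Quack!"

        class Robot:
            # Not related to Duck, but it "quacks", so it is accepted anyway.
            def quack(self):
                return "Beep quack."

        def make_it_quack(thing):
            # No isinstance() check: only the presence of quack() matters.
            print(thing.quack())

        make_it_quack(Duck())   # Quack!
        make_it_quack(Robot())  # Beep quack.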
    In conclusion, polymorphism is a powerful feature of Python. It allows objects to take on multiple forms and behave differently based on the context, and Python's duck typing enables polymorphism by focusing on the presence of certain methods or attributes rather than on the actual type of the object.

    References:
    Jergenson, C. (2022, May 31). What is polymorphism in Python? Educative. https://www.educative.io/blog/what-is-polymorphism-python
    Klein, B. (2022, February 1). Object oriented programming / OOP. python-course.eu. https://python-course.eu/oop/object-oriented-programming.php
    Vishal. (2021, October 21). Polymorphism in Python. PYnative. https://pynative.com/python-polymorphism/

  • Basic Loops in Python

    This article explains how to use 'for' and 'while' statements to create loops in Python, each serving different purposes for repetitive tasks. The article also explores additional control statements such as 'break', 'continue', 'pass', and 'else' to manage loop execution. Alexander S. Ricciardi, March 7th, 2024

    In Python, the major statements required to create loops are 'for' and 'while'. The 'for' statement is mostly used to iterate over iterable objects (such as a string, tuple, or list), much as in other coding languages (Python Software Foundation (a), n.d.). The 'while' loop, on the other hand, is used for repeated execution as long as an expression is true (Python Software Foundation (b), n.d.). In other words, both the 'for' and the 'while' loops are algorithmic, meaning they perform repetitive tasks until a condition is met or as long as a condition remains true. To be more specific, the 'for' loop iterates over a sequence, executing a set of instructions until a condition is met, for example, until the end of the sequence is reached. In comparison, the 'while' loop executes a set of instructions as long as a condition is true. The two loops complement each other, and when nested within each other they can be a powerful tool for solving complex problems. This is the main reason Python has more than one loop statement.

    The 'for' statement
    The 'for' statement goes through each item in the sequence or iterable, one by one, and executes the block of code for each element. Figure 1 (The 'for' loop) depicts the algorithmic nature of the 'for' loop. Note: From 4.3 For Loops in Python, by Colorado State University Global (2024a).

    A scenario of iterating over a sequence using a 'for' loop could be similar to the following:

        user_ids = [101, 102, 103, 104]
        for user_id in user_ids:
            print(user_id)

    The 'while' statement
    The 'while' statement evaluates the condition before each iteration; if the condition is true, the loop's body is executed, and if the condition becomes false, the loop stops. Figure 2 (The 'while' loop) depicts the algorithmic nature of the 'while' loop. Note: From 4.2 While Loops in Python, by Colorado State University Global (2024b).

    A scenario of iterating with a 'while' loop as long as a condition is true could be similar to the following:

        coffee = 0
        homework_num = 100
        while coffee < 100:
            coffee += 1
            print(f"Drinking coffee number {coffee} ...")
            if coffee < 100:
                print(f"Doing homework number {homework_num} ...")
                homework_num -= 1
                if homework_num == 0:
                    break
        else:
            print("Rest in peace!")

    The 'break' will exit the loop. The 'break', 'continue', 'pass', and 'else' statements can be used in conjunction with loops to control their execution. The 'break' statement is used within loops to exit the loop. The 'continue' statement allows the loop to skip the rest of its code block and proceed directly to the next iteration. The 'pass' statement acts as a placeholder and does nothing; it is often used by programmers to stand in for blocks of code that are under construction or not yet implemented. The 'else' statement executes a block of code after the loop completes normally. In other words, the code within the 'else' block runs only if the loop is not terminated by a 'break' statement.
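    Before returning to the 'while' example, here is a short sketch of my own (the numbers are arbitrary) showing 'continue', 'pass', and the loop 'else' in a 'for' loop:

        numbers = [1, 2, 3, 4, 5, 6]
        for n in numbers:
            if n % 2 == 0:
                continue          # skip even numbers and go to the next iteration
            if n == 5:
                pass              # placeholder: special handling for 5 is not implemented yet
            print(f"odd number: {n}")
        else:
            print("Loop finished without hitting a break.")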
    For example, the 'while' loop example above can be written with an 'else' clause as follows:

        coffee = 0
        homework_num = 100
        while coffee < 100:
            coffee += 1
            print(f"Drinking coffee number {coffee} ...")
            if coffee < 100:
                print(f"Doing homework number {homework_num} ...")
                homework_num -= 1
                if homework_num == 0:
                    break
        else:
            print("Rest in peace!")

    Here the 'else' statement is part of the 'while' loop; the code within the 'else' block is executed only if the loop is not terminated by the 'break' statement. In this case, the code within the 'else' statement will run.

    In conclusion, Python's 'for' and 'while' loops, along with control statements like 'break', 'continue', 'pass', and 'else', allow for control and flexibility in managing repetitive tasks in programming and creating effective code.

    References:
    Colorado State University Global. (2024a). 4.3 For loops in Python. Module 4: Python repetition. In ITS320: Basic Programming.
    Colorado State University Global. (2024b). 4.2 While loops in Python. Module 4: Python repetition. In ITS320: Basic Programming.
    Python Software Foundation (a). (n.d.). 4. More control flow tools. The Python tutorial. python.org. https://docs.python.org/3/tutorial/controlflow.html#index-0
    Python Software Foundation (b). (n.d.). 8. Compound statements. The Python language reference. python.org. https://docs.python.org/3/reference/compound_stmts.html

  • Python Data Types: A Quick Guide

    This article explains how to use Python's data types effectively to create scalable and maintainable applications. Alexander S. Ricciardi, February 21st, 2024

    Python offers a rich variety of data types that are fundamental to writing effective and efficient code. Understanding these data types is crucial for any developer, as it allows for proper data storage, manipulation, and retrieval. In this guide, we'll explore common Python data types, their applications, and strategies for determining which data types to use in different scenarios.

    A quick explanation of Python data types: Python offers a vast array of data types. The Python documentation provides detailed descriptions of each data type, and you can find the list at the following link: Data Types. "Python also provides some built-in data types, in particular, dict, list, set and frozenset, and tuple. The str class is used to hold Unicode strings, and the bytes and bytearray classes are used to hold binary data" (Python Software Foundation (a), n.d., Data Types). Built-in data types in Python are fundamental data structures that come standard with Python; you don't need to import any external library to use them. Table 1 (Common Data Types) shows Python's common data types. Note: from Programming in Python 3, by Bailey, 2016.

    Strategy for Determining Data Types
    To determine the data types needed for an application, it is crucial to analyze the data that needs to be collected and understand the application's functionality requirements. In general, this equates to four key steps:
    - Identifying the data: identify what types of data the application will collect and handle, such as textual information and numerical data.
    - Understanding data operations: determine which operations will be performed on the data, such as sorting, searching, or complex manipulations, to ensure the chosen data types can support these functionalities.
    - Structuring data relations: determine how different pieces of data relate to each other and decide on the appropriate structures (e.g., nested dictionaries or lists) to represent these relationships efficiently.
    - Planning for scalability and maintenance: anticipate future expansions or modifications to the application and select data types and structures that allow for modification, updates, and scalability.

    For this specific application, this translates to the following. Note that the information provided does not explicitly state whether the data needs to be manipulated (sorted or modified); however, for the application to be useful and functional, the data needs to be manipulated to some extent. Based on the information provided, the application functionality requirements are as follows:
    - Storing personal information: storing basic personal information for each family member, such as names and birth dates.
    - Address management: managing and storing current, and possibly multiple, addresses for each family member.
    - Relationship tracking: tracking and representing the relationships between different family members (e.g., parent-child, spouses, siblings).
    - Data manipulation: functionalities for editing, sorting, and updating the stored information, including personal details, addresses, and family relationships.

    Based on the information provided, the data that needs to be collected is as follows:
    - Names: this includes family members' first and last names, which are text data.
    - Birth dates: birth dates can be text data, number data, or a mix of both.
    - Addresses: addresses can be complex, potentially requiring storage of multiple addresses per family member, with components like street, city, state, and zip code. They are a mix of number and text data.
    - Relationships: relationships between family members (e.g., parent-child, spouses, siblings) are text data.

    Four data elements and the corresponding data types
    Taking into account the application functionality requirements and the data information above, the following are the four data elements and the corresponding data types.

    Names: the string data type str. This allows us to easily store and manipulate individual names. I will use a tuple to separate the first name and last name, name = ('first_name', 'last_name'). Tuples are great in this case because they are immutable, meaning that once a tuple is created, it cannot be altered, ensuring that the integrity of first and last names is preserved. Additionally, they are indexed, meaning that they can be searched by index; for example, a list of name tuples can be searched by last or first name. Furthermore, a tuple takes less space in memory than a dictionary or a list.

    Birth dates: they could technically be stored as strings, integers, lists, or dictionaries; however, utilizing the datetime.date object from Python's datetime module has significant advantages, such as easy date manipulation and functionality, for example calculating ages or sorting members by their birth dates. In most cases, storing birth dates requires converting input strings into datetime.date objects. Note that datetime is a class. Additionally, in Python, data types (float, str, int, list, tuple, set, ...) are instances of the Python object class; in other words, they are objects. A datetime.date object uses the following data types:
    - Year: an integer representing the year, e.g., 2024.
    - Month: an integer representing the month, from 1 (January) to 12 (December).
    - Day: an integer representing the day of the month, from 1 to 31, depending on the month and year.
    For example (note: the method date.fromisoformat() converts strings into datetime.date objects with integer arguments):

        from datetime import date
        >>> date.fromisoformat('2019-12-04')
        datetime.date(2019, 12, 4)
        >>> date.fromisoformat('20191204')
        datetime.date(2019, 12, 4)
        >>> date.fromisoformat('2021-W01-1')
        datetime.date(2021, 1, 4)

    (Python Software Foundation (b), n.d., datetime - Basic date and time types)

    Addresses: addresses have multiple components such as street, city, state, and zip code. I would use the dictionary data type dict. The dictionary's key-value pair structure is great for storing, modifying, and accessing the various parts of an address.

    Relationships: relationships between family members, such as parent-child, spouses, and siblings. I would use the dictionary data type dict with embedded list and tuple data types. In this structure, the keys represent the types of relationships, and the values are lists of names or identifiers referencing other family members. This allows for easy storing, modifying, and accessing of relationship data.
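    As a side note, here is a small sketch of my own (not from the article; the names and dates are invented) showing the kind of manipulation that datetime.date makes easy, such as computing an age and sorting members by birth date:

        from datetime import date

        members = {
            ("John", "Doe"): date(1974, 6, 5),
            ("Jane", "Doe"): date(1976, 9, 12),
            ("George", "Doe"): date(2004, 2, 28),
        }

        def age(birth_date, today=None):
            """Whole years elapsed since birth_date."""
            today = today or date.today()
            had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
            return today.year - birth_date.year - (0 if had_birthday else 1)

        oldest_first = sorted(members, key=members.get)   # sort names by birth date
        for name in oldest_first:
            print(name, age(members[name]))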
    Below is an example of what the Python code could be like:

        from datetime import date

        user_123 = {
            "name": ("John", "Doe"),          # Using a tuple for the name
            "birth_date": date(1974, 6, 5),   # Using datetime.date for the birth date
            "address": {                      # Using a dictionary for the address
                "street": "123 My Street",
                "city": "Mytown",
                "state": "Mystate",
                "zip_code": "12345"
            },
            "relationships": {                # Using a dictionary with embedded lists and tuples
                "spouse": ("Jane", "Doe"),
                "children": [("George", "Doe"), ("Laura", "Doe")],
                "parents": [("Paul", "Doe"), ("Lucy", "Doe")],
            }
        }

    To create well-structured and maintainable applications in Python, it is essential to choose the right data types. To ensure your code is both efficient and scalable, it is crucial to understand the differences between Python's built-in data types, such as strings, tuples, dictionaries, and datetime objects, and to implement them effectively.

    References:
    Bailey, M. (2016, August). Chapter 3: Types. Programming in Python 3. Zyante Inc.
    Python Software Foundation (a). (n.d.). Data types. Python. python.org. https://docs.python.org/3/library/datatypes.html
    Python Software Foundation (b). (n.d.). datetime - Basic date and time types. Python. python.org. https://docs.python.org/3/library/datetime.html

  • Key Criteria for Developing Python Functions

    This article discusses key criteria for developing Python functions, focusing on code reusability, complexity management, and testability. Alexander S. Ricciardi, March 10th, 2024

    In Python, you can either use a predefined function/method or write a user-defined function/method. In this discussion, I provide three criteria that I would use to develop an appropriate method, the rationale behind selecting these criteria, and an example of my method declarations and return types. The three criteria that I consider when developing my functions/methods are as follows:

    - Code reusability: when a block of code needs to be repeated multiple times in a program, that block is a good candidate to be modularized into a reusable function. This promotes DRY (Don't Repeat Yourself) code, a principle in software development whose goal is to reduce repetition (Schafer, 2015). The rationale behind reusable functions is to make the code more modular, readable, and maintainable; changes only need to be made in one place.
    - Task complexity: if a task is made of many steps (code blocks), wrapping the task's steps in well-named functions makes the task more modular and hides complexity. The rationale is to reduce the task's apparent complexity by hiding the details. In other words, the goal is to separate the "what" from the "how", making the code more understandable, readable, and maintainable. This is very useful for tasks with many steps and very complex logic.
    - Code testability: encapsulating code blocks in functions with clear inputs and outputs makes large programs easier to test. The rationale is that isolating code into functions with clear inputs and outputs enhances the program's robustness and maintainability by facilitating easier testing of the codebase (a tiny test sketch follows this list).
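    To illustrate the testability point, here is a minimal sketch of my own (not part of the original example): once logic is isolated in a small function with clear inputs and outputs, such as the is_prime() helper shown later in this article, it can be checked directly with simple assertions or a test framework like pytest.

        def is_prime(n: int) -> bool:
            """Return True if n is a prime number."""
            if n <= 1:
                return False
            for i in range(2, int(n**0.5) + 1):
                if n % i == 0:
                    return False
            return True

        def test_is_prime():
            assert is_prime(2)
            assert is_prime(13)
            assert not is_prime(1)
            assert not is_prime(9)

        test_is_prime()   # with pytest, this test function would be discovered and run automatically
        print("All is_prime tests passed.")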
    Considering these three criteria is key to creating a professional, high-quality codebase, which, in the long run, saves time and reduces frustration not only for you but also for your coding teammates. This is particularly true when it comes to refactoring the code. "Refactoring, or code refactoring in full, is a systematic process of amending previously built source code, without introducing new functionalities or altering the fundamental workings of the subject software" (Slingerland, 2023). Some of the actions that software development teams commonly take during refactoring include:
    - Reducing the size of the code
    - Restructuring confusing code into simpler code
    - Cleaning up code to make it tidier
    - Removing redundant, unused code and comments
    - Doing away with unnecessary repetitions
    - Combining similar code
    - Creating reusable code
    - Breaking up long functions into simpler, manageable bits
    (Slingerland, 2023)

    Example: here is a code example without functions and a code example with functions. The following program filters the prime numbers from a list and then keeps the squares of those primes that are less than 50.

    Without functions:

        num_lst = [2, 3, 4, 5, 6, 7, 8, 9, 10]
        squared_primes_less_than_50 = []
        for n in num_lst:
            # Check if n is prime
            if n > 1:
                is_prime = True
                for i in range(2, int(n**0.5) + 1):
                    if n % i == 0:
                        is_prime = False
                        break
                if is_prime:
                    # Calculate the square
                    square = n**2
                    # Check if the square is less than 50
                    if square < 50:
                        squared_primes_less_than_50.append(square)
        print(squared_primes_less_than_50)

    With functions:

        def is_prime(n: int) -> bool:
            """
            Check if a number is prime.
            :param n: Integer to check for primality
            :return: Boolean indicating if the number is prime
            """
            if n <= 1:
                return False
            for i in range(2, int(n**0.5) + 1):
                if n % i == 0:
                    return False
            return True

        def filter_primes(num_lst: list[int]) -> list[int]:
            """
            Filter prime numbers from a list.
            :param num_lst: List of integers to filter
            :return: List of prime numbers from the input list
            """
            return [n for n in num_lst if is_prime(n)]

        def square_numbers(num_lst: list[int]) -> list[int]:
            """
            Square each number in a list.
            :param num_lst: List of integers to square
            :return: List of squared integers
            """
            return [n**2 for n in num_lst]

        def squares_less_than(num_lst: list[int], threshold: int) -> list[int]:
            """
            Filter numbers less than a specified threshold from a list.
            :param num_lst: List of integers to filter
            :param threshold: Threshold value for filtering
            :return: List of numbers from the input list that are less than the threshold
            """
            return [n for n in num_lst if n < threshold]

        def main() -> None:
            """
            Main function to execute the program logic.
            :return: None
            """
            numbers = [2, 3, 4, 5, 6, 7, 8, 9, 10]
            primes = filter_primes(numbers)
            squared_primes = square_numbers(primes)
            result = squares_less_than(squared_primes, 50)
            print(result)

        # --- Execute the program
        if __name__ == "__main__":
            main()

    Please see the functions' docstrings and type hints for information about the return types. In summary, by applying the principles of reusability, complexity management, and testability, you can create a robust and scalable codebase that is easier to refactor and extend. This not only improves code quality but also makes maintenance and collaboration more efficient.

    References:
    Schafer, C. (2015, June 16). Programming terms: DRY (don't repeat yourself) [Video]. YouTube. https://www.youtube.com/watch?v=IGH4-ZhfVDk&t=7s
    Slingerland, C. (2023, December 5). What is refactoring? 5 techniques you can use to improve your software. CloudZero. https://www.cloudzero.com/blog/refactoring-techniques/

  • Short-Circuit in Python's Compound Conditional Expressions

    This article explains how Python's short-circuit evaluation in compound conditional expressions enhances efficiency by stopping the evaluation as soon as the outcome is determined. Alexander S. Ricciardi, February 25th, 2024

    To understand the concept of short-circuiting in compound conditional expressions in Python, it is important to be familiar with the logical operators and and or. The table below summarizes the logical outcomes for these operators.

    Table 1: The 'and' and 'or' Operators

        A      B      A AND B   A OR B
        False  False  False     False
        False  True   False     True
        True   False  False     True
        True   True   True      True

    Note: From Module 3: Understanding Python decision control structures, ITS320: Basic Programming, by Colorado State University Global, 2024. Modified 2024, February 25.

    In Python, short-circuiting in the context of compound conditional expressions is when the interpreter stops evaluating a logical expression as soon as the expression's outcome is determined (Severance, 2016). In other words, while reading a logical expression, if the interpreter can determine the outcome of the expression before reaching the end of it, it stops reading the expression. Note that the interpreter reads from left to right. This occurs when using the operators and and or in an expression, and it is called short-circuit boolean evaluation (Hrehirchuk et al., 2024). For example:

    When using the and operator:

        a = 1
        b = 2
        c = 3
        d = 4
        if a < b and a > c and a < d:
            # --- Do something
            pass

    Here the short-circuiting happens when the Python interpreter stops evaluating the logical expression a < b and a > c and a < d at the step a > c, because a > c returns False. Consequently, the expression a < b and a > c and a < d is False; it does not matter whether the expression a < d returns False or True.

    When using the or operator:

        a = 1
        b = 2
        c = 3
        d = 4
        if a > b or a < c or a > d:
            # --- Do something
            pass

    Here the short-circuiting happens when the Python interpreter stops evaluating the logical expression a > b or a < c or a > d at the step a < c, because a < c returns True. Consequently, the expression a > b or a < c or a > d is True; it does not matter whether the expression a > d returns False or True.

    When using a combination of and and or logical operators, the and operator has precedence over the or operator. This is similar to the arithmetic operator precedence between '+' and '*', where '*' has precedence over '+'. The table below depicts the logical operators' precedence using parentheses.

    Table 2: Precedence of Logical Operators

        A or B and C          means  A or (B and C)
        A and B or C and D    means  (A and B) or (C and D)
        A and B and C or D    means  ((A and B) and C) or D
        !A and B or C         means  ((!A) and B) or C

    Note: From Chapter 40, Boolean Expressions and Short-Circuit Operators - Precedence of Logical Operators, by Kjell, n.d. Modified 2024, February 25.

    In conclusion, short-circuiting occurs when the logical operators and and or cause the Python interpreter to stop evaluating an expression once its outcome is clear: when the and operator is used, evaluation stops at the first False, and when the or operator is used, it stops at the first True, which enhances efficiency. Therefore, understanding short-circuit evaluation in Python is crucial for writing efficient and effective conditional expressions.
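    As a closing illustration (a small snippet of my own, not from the article), the short-circuit behavior can be made visible with a helper function that prints when it is evaluated:

        def check(name, value):
            print(f"evaluating {name}")
            return value

        # 'and' stops at the first False: the right operand is never evaluated.
        result = check("left", False) and check("right", True)
        print(result)   # evaluating left / False

        # 'or' stops at the first True: the right operand is never evaluated.
        result = check("left", True) or check("right", False)
        print(result)   # evaluating left / True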

  • Programming Fundamentals: The Power of Modular Development

    This article explores the essential components of programming, highlighting the importance of modular development in creating scalable, maintainable software applications. Alexander S. Ricciardi, February 11th, 2024

    What are the principal components involved in developing a program?

    The principal components involved in developing an application are planning and analysis, design, implementation, testing, and maintenance. The process is cyclical, meaning that the components are treated as repeatable steps during the lifetime of the application. This was not always the case: in the past, most applications were sold on Compact Discs (CDs), and software maintenance was only accessible to businesses that could afford a team of programmers capable of maintaining (upgrading/updating) their software. This application development model was not accessible to individual users or smaller businesses. Today, most individual user applications are developed using a programming development cycle model. The following list outlines six components, or steps, involved in a programming development cycles model:
    - Analyze the problem or need: understand the issue or need and decide on a programmatic solution.
    - Design the program (logic): use tools like flowcharts to visualize the program's flow.
    - Code the program: write the source code using a programming language.
    - Debug the program (test): identify and fix errors or "bugs" in the code.
    - Formalize the solution: ensure the program is ready for release and formalize documentation for understanding and future maintenance.
    - Release and maintain the program.
    (Nguyen, 2019)

    The word "cycles" is pluralized in the expression "programming development cycles" because the development steps can be repeated by section. For example, the steps 'Code the program' and 'Debug the program' can form a cyclic section of the development, meaning that after you debug the program you may need to recode it and then debug it again. The diagram below shows the cyclical nature of a programming development cycle model.

    Figure 1: Programming Development Cyclical Nature. Note: From Programming Development Cycles, by Nguyen, 2019.

    Describe the benefits of breaking up a program into small modules and provide an example of an application that makes the most of one of these benefits.

    Breaking up a program into small modules has several benefits, such as readability, manageability, easier testing, reusability, and maintainability of code. Block-structured languages structurally implement what can be considered a low level of modularization for readability and functionality; for example, C and C++ use braces {} to group (modularize) code, and Python uses indentation for the same purpose (Klein, 2022). Object-oriented programming languages take it a step further by implementing classes that allow the creation of object instances that encapsulate both code and data, allowing modularization of a program. I store my program classes in different files and directories. Additionally, importing libraries, for example in C++ and Python, is a form of modular programming.
    Modular programming is:

    "Modular programming is a general programming concept where developers separate program functions into independent pieces. These pieces then act like building blocks, with each block containing all the necessary parts to execute one aspect of functionality." (Macdonald, 2023)

    "Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute only one aspect of the desired functionality." (Busbee & Braunschweig, 2018)

    The benefits of breaking up a program into small modules, or modular programming, can be listed as follows:
    - Readability: modular code is easier to read and understand because it is divided into logical sections, each performing a distinct function.
    - Manageability: smaller, self-contained modules are easier to manage because changes in one module are less likely to impact others.
    - Easier testing: modules can be tested independently, making it simpler to isolate and resolve defects.
    - Reusability: functions or classes defined in one module can be reused in other parts of the program or in future projects, saving development time.
    - Maintenance: updating a module for new requirements or fixing bugs is more straightforward when the application is modularized, enhancing long-term code maintenance.
    (Busbee, 2013)
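    As a tiny sketch of these ideas (my own illustration, shown as two files; the module and function names are made up), the reusable logic lives in one module and the program that uses it imports it instead of repeating it:

        # geometry.py - a small, reusable, independently testable module
        def rectangle_area(width: float, height: float) -> float:
            """Return the area of a width-by-height rectangle."""
            return width * height

        # main.py - imports the module instead of duplicating its logic
        from geometry import rectangle_area

        if __name__ == "__main__":
            print(rectangle_area(3, 4))   # 12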
    In general, it is good practice to modularize your program, whether it is small or large. Applications that benefit most from modular programming are large-scale web applications. Large-scale web applications are complex systems: they need to handle a large and increasing number of users, heavier traffic loads, and an exponentially growing data pool. An example of a large-scale web application is an online retail platform, which needs to manage vast amounts of user and product data, process transactions securely, and scale dynamically (Struk, 2023). For large-scale web applications, modular programming combined with a microservices architecture (a form of modularization that breaks a program up into separate services) is crucial for manageability, efficiency, scalability, and maintainability.

    A popular framework for web applications is React, which uses the concept of components as modules.

    "In React, you develop your applications by creating reusable components that you can think of as independent Lego blocks. These components are individual pieces of a final interface, which, when assembled, form the application's entire user interface." (Herbert, 2023)

    "React's component-based architecture allows you to create modular and reusable code that can help you scale your web application as your business grows. This makes React a great choice for developing large-scale applications that require maintainability, scalability, and flexibility." (Hutsulyak, 2023)

    The modular development approach helps developers create software that is maintainable, scalable, and efficient. By breaking down complex applications into smaller, independent modules, developers can create systems that are easier to manage, test, and update. Whether you're working on a small project or a large-scale web application, utilizing modular programming will empower you to build more resilient and adaptable software solutions.

    References:
    Busbee, K. L. (2013, January 10). Programming fundamentals - A modular structured approach using C++. Internet Archive.
    Busbee, K. L., & Braunschweig, D. (2018, December 15). Modular programming. Programming Fundamentals. https://press.rebus.community/programmingfundamentals/chapter/modular-programming/
    Herbert, D. (2023, November 13). What is React.js? Uses, examples, & more. HubSpot Blog. https://blog.hubspot.com/website/react-js
