
SEARCH RESULTS


  • Wi-Fi Security: Risks, Protocols, and Best Practices

    This article explores the technology of Wi-Fi, detailing its evolution, relationship to other wireless technologies, and inherent security vulnerabilities. It provides an overview of common Wi-Fi security risks, solutions, and various authentication protocols, emphasizing the importance of proactive security measures for safe and reliable wireless network usage. Alexander S. Ricciardi January 9, 2025

    In recent decades, wireless technology has infiltrated every aspect of daily life, becoming an integral part of how most of us communicate, access information, work, and interact with the world. Wireless technology refers to any technology that enables communication or data transfer without the use of wires or cables (Hasons, 2024). This post explores the widely used wireless technology Wi-Fi, how it relates to other wireless technologies, the challenges and security concerns associated with it, and how to address them.

    Types of Wireless Technologies and How They Relate to Wi-Fi

    Wireless technologies enable telecommunication, that is, the transfer of information between two or more devices without the use of physical media such as wires and optical fiber. The four main types of wireless technology commonly used to transfer data today are cellular networks for mobile communication, Bluetooth for short-range device connection, satellite communication for broad coverage, and Wi-Fi for local area networking. All of these technologies use radio waves instead of other electromagnetic signals, such as the infrared light used by remote controls or the visible light used for Li-Fi, because radio waves allow for longer ranges and better penetration through obstacles such as walls.

    Wi-Fi is the most widely used telecommunication technology within Wireless Local Area Networks (WLANs). It uses specific radio frequencies (2.4 GHz, 5 GHz, and 6 GHz) and has protocols optimized for creating local area networks.
    In other words, it can provide internet access locally (within a limited range), making it ideal for homes, offices, and public spaces. Wi-Fi has evolved and continues to evolve by increasing data rates and bandwidth. The table below shows the different Wi-Fi standards and how they have evolved.

    Table 1 Wi-Fi Standards
    Note: data from “The Evolution of Wi-Fi Networks: From IEEE 802.11 to Wi-Fi 6E” by Links (2022).

    On a side note, Li-Fi is a relatively new technology; it is a “bidirectional wireless system that transmits data via LED or infrared light. It was first unveiled in 2011” (Iberdrola, n.d., p. 1). Li-Fi is cheaper, faster, and has a larger data volume capacity than Wi-Fi. However, it cannot communicate through walls or other opaque materials as Wi-Fi does, because it relies on visible or infrared light instead of radio waves.

    Wi-Fi Security Concerns

    Wireless networks have many advantages, such as eliminating the need for physical cables (low installation cost), allowing greater flexibility in device placement, and providing mobility to users. However, unlike wired networks, where access is physically restricted by cables, in wireless networks electromagnetic signals such as radio waves can be intercepted by anyone within range using a compatible wireless device. Networks using Wi-Fi technology are particularly vulnerable to unauthorized access (rogue access points), data interception (eavesdropping), Denial of Service (DoS) attacks, and malware. The table below illustrates the most common Wi-Fi risks and solutions.

    Table 2 Wi-Fi Security Risks and Solutions
    Note: data from “Exploring Common Wi-Fi attacks: A deep dive into wireless network vulnerabilities” by ITU (2024) and “Introduction to Wireless Networks” by Grigorik (2016).

    Wi-Fi Security Protocols

    As shown in Table 2, Wi-Fi networks are vulnerable to attacks if not secured properly. The IEEE 802.11 Wi-Fi standard provides various types of authentication protocols.
    Thus, it is essential to understand the differences between them to choose the one that meets the security needs of a specific wireless network. For example, the WPA3 protocol offers the strongest security but requires new and expensive hardware. WPA2 provides AES encryption and is the most widely recommended access protocol, as it provides compatibility between older and newer devices. The table below describes the main authentication protocols associated with the Wi-Fi standard, as well as their strengths and weaknesses.

    Table 3 Wi-Fi Security Access Protocols
    Note: data from various sources (Freda, 2022; Raphaely, n.d.; Basan, 2024; AscentOptics, 2024).

    Not only is it important to understand the differences between the Wi-Fi security protocols, it is also essential to keep informed about the latest security vulnerabilities and updates. For instance, at the beginning of 2024, a new Wi-Fi vulnerability was discovered by researchers (Migliano, 2024). The CVE-2023-52424 vulnerability affects all operating systems; it is categorized as a Service Set Identifier (SSID) Confusion attack, in which Wi-Fi clients can be tricked into connecting to an untrusted network. The table below describes which types of networks and authentication are vulnerable to CVE-2023-52424.

    Table 4 Types of Wi-Fi Networks Vulnerable to SSID Confusion Attacks
    Note: from “New WiFi vulnerability explained: Protecting against SSID confusion attacks” by Migliano (2024).

    To defend against this new vulnerability, the 802.11 Wi-Fi standard needs to be updated to incorporate the SSID as part of the 4-way handshake when connecting to protected networks, and beacon protection needs to be improved to allow a client to store a reference beacon containing the network's SSID to verify its authenticity during the 4-way handshake (Lakshmanan, 2024).
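    The protocol comparison above can be sketched in code. The following is a minimal Python sketch (the ranking values, function name, and scan data are all illustrative assumptions, not from any real Wi-Fi API) that flags networks whose authentication protocol falls below a chosen baseline such as WPA2:

```python
# Hypothetical relative strength ranking of the Wi-Fi security protocols
# discussed above (higher is stronger); the numbers are illustrative only.
PROTOCOL_STRENGTH = {
    "Open": 0,   # no encryption at all
    "WEP": 1,    # RC4-based and broken; avoid
    "WPA": 2,    # TKIP; deprecated
    "WPA2": 3,   # AES-CCMP; widely compatible baseline
    "WPA3": 4,   # SAE handshake; strongest, needs newer hardware
}

def weak_networks(networks, minimum="WPA2"):
    """Return the SSIDs whose protocol falls below the given minimum."""
    floor = PROTOCOL_STRENGTH[minimum]
    return [ssid for ssid, proto in networks
            if PROTOCOL_STRENGTH.get(proto, 0) < floor]

# Hypothetical scan results as (SSID, protocol) pairs.
scan = [("HomeLAN", "WPA2"), ("CoffeeShop", "Open"), ("OldRouter", "WEP")]
print(weak_networks(scan))  # flags the Open and WEP networks
```

    A real audit would obtain the (SSID, protocol) pairs from the operating system's wireless tools; the point here is only the policy check itself.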
    In conclusion, since wireless technology, and more specifically the Wi-Fi standard, has become an indispensable part of modern life, it is important to understand the associated security risks, their solutions, and the authentication protocols used to secure Wi-Fi and other wireless networks. Ultimately, prioritizing security by proactively addressing wireless network vulnerabilities is essential for the safe use of this indispensable technology.

    References:
    AscentOptics. (2024, January 9). WEP, WPA, WPA2, WPA3: Classifying and comparing wireless protocols. AscentOptics Blog. https://ascentoptics.com/blog/wep-wpa-wpa2-wpa3-classifying-and-comparing-wireless-protocols/
    Basan, M. (2024, April 29). Wireless network security: WEP, WPA, WPA2 & WPA3 explained. eSecurity Planet. https://www.esecurityplanet.com/trends/the-best-security-for-wireless-networks/
    Freda, A. (2022, February 14). WEP, WPA, or WPA2 — which Wi-Fi security protocol is best? AVG. https://www.avg.com/en/signal/wep-wpa-or-wpa2
    Grigorik, I. (2016, April 27). Introduction to wireless networks. High Performance Browser Networking. https://hpbn.co/introduction-to-wireless-networks/
    Hasons. (2024, February 26). Wireless technology – What is wireless technology? Hasons. https://hasonss.com/blogs/wireless-technology/
    Iberdrola. (n.d.). What is LiFi technology? LiFi, the internet at the speed of light. Iberdrola Group. https://www.iberdrola.com/innovation/lifi-technology
    ITU. (2024, February 7). Exploring common Wi-Fi attacks: A deep dive into wireless network vulnerabilities. ITU Online IT Training. https://www.ituonline.com/blogs/common-wi-fi-attacks/
    Lakshmanan, R. (2024, May 16). New Wi-Fi vulnerability enables network eavesdropping via downgrade attacks. The Hacker News. https://thehackernews.com/2024/05/new-wi-fi-vulnerability-enabling.html
    Links, C. (2022, May 19). The evolution of Wi-Fi networks: From IEEE 802.11 to Wi-Fi 6E. Wevolver. https://www.wevolver.com/article/the-evolution-of-wi-fi-networks-from-ieee-80211-to-wi-fi-6e
    Migliano, S. (2024, May 14). New WiFi vulnerability explained: Protecting against SSID confusion attacks. https://www.top10vpn.com/research/wifi-vulnerability-ssid/?utm_source
    Raphaely, E. (n.d.). A complete guide to wireless (Wi-Fi) security. SecureW2. https://www.securew2.com/blog/complete-guide-wi-fi-security

  • IP Address Allocation in a Small Class C Network

    This article provides a step-by-step guide to allocating IPv4 addresses within a small business network using a Class C subnet. It covers key concepts like subnet masks, network addresses, and broadcast addresses. It also provides an example of an IP addressing scheme for devices like servers, printers, and VoIP phones. Alexander S. Ricciardi January 1, 2025

    Regardless of the size of a network, proper IP address allocation is crucial for its efficiency and security. This article examines a scenario where a network administrator needs to assign IPv4 addresses within a small Class C network (192.168.1.0/24 with a subnet mask of 255.255.255.0, providing 254 usable addresses). The network comprises 100 nodes, including four servers (a domain controller, a replica, a data server, and a web server), a network printer, and a VoIP phone system.

    Overview of the Network

    Before assigning IP addresses to devices, it is important to understand the purpose of each device and to plan the IP address allocation. This section analyzes the provided Class C network scheme.

    Subnet Mask: 255.255.255.0 (or /24) – This is the classful subnet mask for a Class C network.
    Table 1 IP Address Classes
    Note: From “Lesson 5: IPv4 and IPv6 addresses. CompTIA Network+ Pearson N10-007 (Course & Labs)” by uCertify (2019).
    Network Address: 192.168.1.0 – This is a private Class C network; see the table below.
    Table 2 Private IP Networks
    Note: From “Lesson 5: IPv4 and IPv6 addresses. CompTIA Network+ Pearson N10-007 (Course & Labs)” by uCertify (2019).

    Host IP Addresses
    Total Possible IP Addresses: The /24 subnet mask indicates that the first 24 bits of the IP address identify the network; the last 8 bits (32 total bits - 24 network bits = 8 host bits) identify the hosts. With 8 bits, you can have 2⁸ (2 to the power of 8) = 256 IP address combinations. Therefore, there are 256 possible IP addresses within the 192.168.1.0/24 network.
    Reserved Addresses (Network and Broadcast):
    Network Address: The first possible address is reserved for the network address. It has all host bits set to 0. For this example, it is 192.168.1.0.
    Broadcast Address: The last possible address is reserved for the directed broadcast address. Its address has all host bits set to 1. For this example, it is 192.168.1.255.
    Host Usable IP Addresses: Since the network address and the broadcast address are reserved, they cannot be assigned to hosts. Therefore, the number of usable host IP addresses is 256 - 2 = 254. To calculate the total possible host IP addresses, the following formula is used: 2ʰ - 2, where h is the number of host bits in the subnet mask. For this example: 2⁸ - 2 = 256 - 2 = 254. See the table below.
    Table 3 Usable Host IP Addresses
    Note: From “Lesson 5: IPv4 and IPv6 addresses. CompTIA Network+ Pearson N10-007 (Course & Labs)” by uCertify (2019).

    Number of Devices: 100 nodes (4 servers, 1 printer, 1 VoIP system - assuming multiple phones - and the rest workstations).

    In summary:
    Network: 192.168.1.0/24
    Total IP Addresses: 256
    Network Address: 192.168.1.0
    Broadcast Address: 192.168.1.255
    Usable IP Addresses: 254 (from 192.168.1.1 to 192.168.1.254)

    Specific Device IP Address Requirements

    This section examines possible solutions to specific devices' IP address requirements and proposes an IP addressing scheme, considering the device types and their roles within the network. The network needs to support servers, printers, a VoIP system, workstations, and mobile devices.

    Servers typically require static IP addresses; this allows consistent access, enhances security, and simplifies management. It is especially important for the domain controller, which needs a fixed address for clients to connect reliably.

    Printers can use dynamic IP addresses; however, assigning a static IP address to a printer with mobile printing capability ensures that mobile devices can consistently locate and connect to it.
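    The subnet arithmetic above can be verified with Python's standard ipaddress module. This is a quick sketch; any /24 network behaves the same way:

```python
# Verify the network, broadcast, and usable-host figures for 192.168.1.0/24
# using Python's standard-library ipaddress module.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.network_address)     # 192.168.1.0   (all host bits set to 0)
print(net.broadcast_address)   # 192.168.1.255 (all host bits set to 1)
print(net.num_addresses)       # 256 total addresses (2**8)
print(len(list(net.hosts())))  # 254 usable host addresses (2**8 - 2)
```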
    VoIP phone systems require a dedicated range of IP addresses to function correctly and for security reasons; it is also important that the allocated range is large enough to accommodate the number of phones and potential scaling.

    Workstations, laptops, and mobile devices typically use dynamic IP addresses. This allows devices to automatically receive IP addresses from a DHCP server, providing flexibility to the user and reducing network administrative overhead.

    Device IP Address Scheme

    Now that the device IP address requirements have been defined, the device IP address scheme can be set by dividing the network range into logical pools:

    Network infrastructure devices (e.g., router, firewall, default gateway): 192.168.1.1 – 192.168.1.9 (9 addresses). These addresses are reserved for network infrastructure devices such as the default gateway (for example, 192.168.1.1) or a firewall device.

    Servers (domain controller, replica, data server, web server): 192.168.1.10 – 192.168.1.14 (5 addresses). Static IP addresses are assigned to the servers. Example:
    - Domain Controller: 192.168.1.10
    - Replica DC: 192.168.1.11
    - Data Server: 192.168.1.12
    - Web Server: 192.168.1.13

    Network printer(s): 192.168.1.15 – 192.168.1.17 (3 addresses). Static IP addresses for the current printer and future printers.

    VoIP phones: 192.168.1.18 – 192.168.1.39 (22 addresses). This pool is used for VoIP phones.

    DHCP pool (dynamic addresses for workstations, laptops, mobile phones, etc.): 192.168.1.40 – 192.168.1.200 (161 addresses). It allows devices to receive addresses automatically, and the range can be adjusted in the DHCP server's configuration. This covers the 100 hosts and adds 61 extra addresses for mobile devices and scaling.

    Reserved for scaling: 192.168.1.201 – 192.168.1.254 (54 addresses). Keep a block of addresses for future needs and scaling.

    To summarize, this post explored the allocation of IP addresses within a small Class C network: the 192.168.1.0/24 network.
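    An allocation plan like the one above can be sanity-checked with a short script. The sketch below (pool names and the check_pools helper are illustrative, not part of any standard tool) encodes each pool as a range of final octets and verifies that the pools stay inside the usable host range and do not overlap:

```python
# The proposed pools, keyed by purpose, as (first, last) final octets
# of 192.168.1.x.
pools = {
    "infrastructure": (1, 9),
    "servers": (10, 14),
    "printers": (15, 17),
    "voip": (18, 39),
    "dhcp": (40, 200),
    "reserved": (201, 254),
}

def check_pools(pools):
    """Verify pools fit in 1-254 and do not overlap; return addresses used."""
    used = set()
    for name, (lo, hi) in pools.items():
        assert 1 <= lo <= hi <= 254, f"{name} leaves the usable host range"
        rng = set(range(lo, hi + 1))
        assert not (rng & used), f"{name} overlaps another pool"
        used |= rng
    return len(used)

print(check_pools(pools))  # 254: every usable host address is accounted for
```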
    By understanding the scheme of the network, including its subnet mask, total and usable IP addresses, and the specific requirements of different devices, an IP addressing scheme was developed to accommodate the needs of the different types of devices and meet the requirements set by the given scenario.

    References:
    uCertify. (2019). Lesson 5: IPv4 and IPv6 addresses. CompTIA Network+ Pearson N10-007 (Course & Labs) [Computer software]. uCertify LLC. ISBN: 9781616910327

  • TCP/IP Open Port Scanning: Open Ports, Hidden Dangers

    The article explores TCP/IP port scanning by considering its potential benefits and drawbacks. It emphasizes the importance of balancing proactive vulnerability identification with the potential impact on network performance and security systems. Alexander S. Ricciardi December 19, 2024

    TCP/IP ports are used by devices and applications on a network to communicate. In other words, they act as gateways for devices, programs, and networks to broadcast information and communicate (Kolaric, 2024). However, as communication gateways, their open nature makes them vulnerable to exploitation by malicious actors. One potential solution to mitigate this risk is to regularly scan open ports. This post examines the efficiency and feasibility of scanning open TCP/IP ports, that is, whether it helps secure them or creates more problems than it solves.

    Before exploring port scanning, it is important to understand why TCP/IP ports are vulnerable in the first place. For example:

    Unsecured (legacy) services, protocols, or ports such as 21 (FTP), 23 (Telnet), 110 (POP3), 143 (IMAP), and 161 (SNMPv1 and SNMPv2) are vulnerable because the protocols using these ports do not provide authentication, integrity, or confidentiality (cjs6891, n.d.).

    Attackers often target default ports, that is, ports used by services with default configurations, such as databases like SQL Server and MySQL (ports 1433, 1434, and 3306), as well as services such as SSH (port 22) and HTTP (port 80). These ports are targeted because they are well known and widely used as default ports for databases, services, and some applications.

    Even secure protocols such as HTTPS (port 443) are vulnerable to attacks like cross-site scripting (XSS) and SQL injection, which exploit weaknesses in web applications (Techa, 2024).

    Attackers use various approaches to exploit open TCP/IP ports' vulnerabilities.
    These methods include credential brute-forcing (repeatedly trying to log in with different credentials), spoofing and credential sniffing (impersonating legitimate users to intercept and steal sensitive information), exploiting application vulnerabilities (listening on open ports to gain control of systems or steal data), and denial-of-service (DoS) attacks (flooding open ports with traffic, overwhelming a system) (Murphy, 2023).

    With so many potential threats targeting open TCP/IP ports, a proactive solution to mitigate this risk is TCP/IP port scanning. TCP/IP port scanning is a technique that uses software to scan a network or server to identify which ports are open and listening (receiving information), as well as to reveal the location or presence of network security devices, like firewalls (Paloalto, n.d.). This technique is also called fingerprinting. Fingerprinting can help identify open ports and the services running on them, revealing potentially suspicious activities, for example, multiple unauthorized access or login attempts and the presence of unknown services.

    Port scanning can be used to verify that security devices are in place, functioning correctly, and that only authorized ports are open. It can also map active hosts to their IP addresses, revealing unknown IP addresses and active hosts, as well as detecting unauthorized changes to the network configuration. Therefore, port scanning can be used as a proactive security tool for identifying and addressing security vulnerabilities within a network.

    However, if used too aggressively, port scanning can interfere with other security systems, such as Intrusion Detection Systems (IDS), triggering unwanted security alerts. It can also impact network performance by consuming network bandwidth and resources. Thus, it is essential to balance the use of port scanning with the appropriate frequency and scope of scans.
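    The simplest form of the technique described above, a TCP connect scan, can be sketched with Python's standard library alone (the function name and timeout value are illustrative; only scan hosts you are authorized to test). A full TCP handshake is attempted on each port; ports that complete it are reported as open:

```python
# A minimal TCP connect-scan sketch using only the standard library.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # A completed handshake means the port is open and listening.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused, filtered, or timed out: treat as not open
    return open_ports

# Example: probe a few well-known ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

    Tools like Nmap instead default to a half-open SYN scan, which never completes the handshake and is therefore stealthier and faster than the connect scan sketched here.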
    It is also essential to use the tools and techniques that are best suited to the specific characteristics of the network. Below is a table describing some of those tools and techniques.

    Table 1 Tools and Techniques for Port Scanning
    Note: Data from NMAP (n.d.) and Kost (2024).

    To summarize, TCP/IP port scanning is a proactive technique for assessing and improving network security. However, if used too aggressively, port scanning can interfere with security systems and impact network performance. Therefore, it is essential to define the appropriate scope and frequency of scans and to select the tools and techniques that are best suited to the specific characteristics of the network. When used properly, port scanning can play a role in mitigating the risks associated with open TCP/IP ports and strengthening the overall security of a network.

    References:
    cjs6891. (n.d.). Cisco CCNA Cyber Ops SECFND 210-250, Section 3: Understanding common TCP/IP attacks. E17_blog. GitHub. https://cjs6891.github.io/el7_blog/texts/cisco-ccna-cyber-ops-secfnd-3/#:~:text=Examples%20of%20insecure%20services%2C%20protocols,authenticity%2C%20integrity%2C%20and%20confidentiality
    Kolaric, D. (2024, June 5). Identifying secure and unsecured ports and how to secure them. All About Security. https://www.all-about-security.de/identifying-secure-and-unsecured-ports-and-how-to-secure-them/
    Kost, E. (2024, November 18). Top 5 free open port check tools in 2024. UpGuard. https://www.upguard.com/blog/best-open-port-scanners
    Murphy, D. (2023, December 11). Open port vulnerabilities: How to secure open ports. Lepide Blog. https://www.lepide.com/blog/how-to-secure-open-ports-from-vulnerabilities/
    NMAP Org. (n.d.). TCP SYN (Stealth) Scan (-sS) | Nmap Network Scanning. NMAP Org. https://nmap.org/book/synscan.html#:~:text=TCP%20SYN%20(Stealth)%20Scan%20(,it%20never%20completes%20TCP%20connections
    Paloalto. (n.d.). What is a port scan? Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/what-is-a-port-scan#:~:text=Running%20a%20port%20scan%20on,revealing%20the%20presence%20of%20security&text=Port%20scanning%20plays%20a%20crucial,can%20signal%20potential%20security%20vulnerabilities
    Schrader, D. (2024, September 23). Identifying common open port vulnerabilities in your network. Netwrix. https://blog.netwrix.com/open-ports-vulnerability-list
    Techa, M. (2024, November 11). Understanding common ports used in networks for TCP and UDP usage. Netwrix. https://blog.netwrix.com/common-ports

  • UML Diagrams: A Guide for Software Engineers

    This article provides an overview of Unified Modeling Language (UML) diagrams, their types, and their applications in software engineering, and explains how UML diagrams help software developers visualize, design, and document complex systems. Alexander S. Ricciardi December 17, 2024

    In Software Engineering (SE), Unified Modeling Language (UML) diagrams are essential tools for communicating ideas and understanding systems. UML diagrams are integral to SE because they present a suite of modeling artifacts that are a globally accepted standard for SE (Unhelkar, 2018a). In other words, they provide a standard visual language for representing the structure, behavior, and interactions within software systems (Tim, 2024). Furthermore, these diagrams are organized within distinct modeling spaces (MOPS, MOSS, and MOAS) that guide their application to specific aspects of the software development process (Unhelkar, 2018b). They help software development teams to plan, design, and communicate with stakeholders.

    UML version 2.5 defines 14 types of diagrams: use case, activity, package, class, profile, sequence, communication, interaction overview, object, state machine, composite structure, component, deployment, and timing diagrams. The table below provides a brief description of each diagram.

    Table 1 UML Diagrams
    Note: Data from “Chapter 2 - Review of 14 Unified Modeling Language Diagrams” by Unhelkar (2018a). Modified.

    As shown in Table 1, there are two main categories of UML diagrams, structural and behavioral. A structural diagram illustrates the way a system is organized/structured, whereas a behavioral diagram illustrates the flow of activities, actions, or interactions (behaviors) within the system (Unhelkar, 2018a). A diagram can represent either a static or dynamic view of a system.
    Static diagrams illustrate the structure of a system at a specific point in time, whereas dynamic diagrams capture the system's changes over a period of time or during execution, emphasizing its time-dependent aspects. The diagram below categorizes the diagrams into structural and behavioral groups and adds the interaction subgroup to the behavioral category. An interaction diagram shows interactions between the components of a system, and it can also depict how a system as a whole interacts with external entities (Dwivedi, 2019).

    Figure 1 UML Diagrams Diagram
    Note: From “Unified Modeling Language (UML) Diagrams” by GeeksforGeeks (2024).

    Each UML diagram plays a role in modeling different areas of a software system. These areas can be divided into three categories called modeling spaces, with each diagram responsible for modeling within those spaces (Unhelkar, 2018b). The three modeling spaces are:

    Model of Problem Space (MOPS): It models “what” the business problem or user need is. MOPS' goal is to understand and model the business requirements.

    Model of Solution Space (MOSS): It models “how” the solution to the problem will be implemented. MOSS' goal is to represent the system's structure, behavior, and interactions using diagrams like class diagrams, sequence diagrams, and object diagrams.

    Model of Architectural Space (MOAS): It models the “big picture” and the overall technical environment. MOAS' goals are to define architectural constraints, manage the project, and ensure quality.

    These spaces are crucial for organizing and structuring the software development process; without them, the use of UML can degenerate into incorrect or excessive modeling (Unhelkar, 2018b). The figure below illustrates how the three models relate to each other, to the different actors, and to the software development process.

    Figure 2 Modeling Spaces
    Note: From “Software Projects and Modeling Spaces: Package Diagrams.
    Software Engineering with UML,” Figure 3.3, by Unhelkar (2018b).

    Each UML diagram has a varying level of importance within the different modeling spaces. The table below maps each diagram to each modeling space, assigning up to 5 '*' to show the diagram's level of importance within a mapped space, with 5 '*' being the highest level of importance (utmost importance).

    Table 2 Importance of Each UML Diagram to Its Respective Modeling Space (with 5 * for Utmost Importance to That Particular Space)
    Note: From “Software Projects and Modeling Spaces: Package Diagrams. Software Engineering with UML,” Table 3.2, by Unhelkar (2018b).

    For example, in the Model of Solution Space (MOSS), the three most important diagrams are the class, sequence, and composite structure diagrams, where each diagram plays a different role in modeling the solution:

    Class diagrams illustrate detailed designs and programming constructs. They can also model relational database tables. Class diagrams define the system's structure, showing the classes, their attributes, methods, and the relationships between them.

    Sequence diagrams illustrate detailed models of interactions within the system. They depict the dynamic exchange of messages between objects over time.

    Composite structure diagrams illustrate the internal structure of a classifier (like a class or component), that is, the functionality of a group of objects and components, including their interfaces and realizations. It is the only UML diagram used to model the physical components of a system or business. (Unhelkar, 2018b)

    For example, let's explore the composite structure diagram of a simple Item class. See the figure below.

    Figure 3 Item Class Composite Structure Diagram

    Note that the Item class provides an interface for the website component of the project, allowing the website to access and display item information such as name, price, and availability.
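    As a rough code analogue of the Item composite structure just described, the sketch below models the provided interface that the website component consumes (all class and method names here are hypothetical, invented for illustration; they are not taken from Unhelkar's example):

```python
# A hypothetical Python analogue of the Item composite structure: the Item
# class realizes an interface that a website component depends on.
from abc import ABC, abstractmethod

class ItemInfo(ABC):
    """Provided interface: what the website may ask of an item."""
    @abstractmethod
    def name(self) -> str: ...
    @abstractmethod
    def price(self) -> float: ...
    @abstractmethod
    def available(self) -> bool: ...

class Item(ItemInfo):
    """Concrete classifier realizing the ItemInfo interface."""
    def __init__(self, name, price, stock):
        self._name, self._price, self._stock = name, price, stock
    def name(self): return self._name
    def price(self): return self._price
    def available(self): return self._stock > 0

def render_listing(item: ItemInfo) -> str:
    """The website component: depends only on the ItemInfo interface."""
    status = "in stock" if item.available() else "sold out"
    return f"{item.name()} - ${item.price():.2f} ({status})"

print(render_listing(Item("Widget", 9.99, 3)))
```

    The design point mirrors the diagram: the website never touches Item's internals, only the interface the classifier exposes.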
    The figure below provides a basic illustration of the components that can be found in a UML composite structure diagram.

    Figure 4 Basic Composite Structure Diagram Components
    Note: From “Composite Structure Diagram” by Udacity (2015).

    To summarize, UML diagrams are essential for communicating ideas and understanding systems. The 14 UML diagram types are categorized as either structural or behavioral and can represent either a static or dynamic view of a system. Additionally, these diagrams have varying levels of importance within the modeling spaces (MOPS, MOSS, and MOAS). Thus, selecting the most relevant diagrams for each stage of the software development lifecycle is essential. The Item Class Composite Structure Diagram example shown above is an application of UML within the appropriate modeling space (MOSS in this case). Ultimately, the strategic use of UML diagrams throughout the software development lifecycle is essential to avoid incorrect or excessive modeling, as they guide and empower software engineers to create representations of problems, solutions, structures, interactions, and relationships within complex systems.

    References:
    Dwivedi, N. (2019, September 9). Type of UML models. Software Design: Modeling with UML [Video]. LinkedIn Learning. https://www.linkedin.com/learning/software-design-modeling-with-uml/types-of-uml-models?u=2245842
    GeeksforGeeks. (2024, October 23). Unified Modeling Language (UML) diagrams. GeeksforGeeks. https://www.geeksforgeeks.org/unified-modeling-language-uml-introduction/
    Tim. (2024, November 5). Top 7 most common UML diagram types for software architecture. Icepanel. https://icepanel.io/blog/2024-11-05-top-7-most-common-UML-diagram-types
    Udacity. (2015, February 23). Composite structure diagram [Video]. YouTube. https://www.youtube.com/watch?v=pJyuKhD86Ro
    Unhelkar, B. (2018a). Chapter 2 - Review of 14 Unified Modeling Language diagrams. Software engineering with UML. CRC Press. ISBN 9781138297432
    Unhelkar, B. (2018b). Chapter 3 - Software projects and modeling spaces: Package diagrams. Software engineering with UML. CRC Press. ISBN 9781138297432

  • Texture Mapping in Computer Graphics - WebGL

    This article explains how texture mapping in computer graphics applies 2D images to 3D models. It also discusses advanced techniques like environment and bump mapping, which improve surface detail using minimal computational resources. Alexander S. Ricciardi September 25, 2024

    In computer graphics, texture mapping is a technique used to apply 2D images to the surface of 3D models. This technique allows the addition of complex patterns, detailed color schemes, and surface characteristics to 3D models' surfaces without the overhead of implementing complex geometry to mimic intricate surfaces, thereby reducing computational cost while still enhancing the visual detail of objects and the realism of scenes.

    On a side note, a similar technique applies 3D textures to 3D models; this technique is usually referred to as volume texturing, which could also be called texture mapping. However, the main difference between the two techniques is that 2D texture mapping applies flat images onto the surface of a 3D model, see Figure 1, while 3D texture mapping applies volumetric textures (3D textures) throughout the 3D space of the model.

    Figure 1 2D Texture Applied to a 3D Box
    Note: The Quaker cereal box is an illustration of how a 2D texture is applied to a 3D object.

    When using APIs such as WebGL, texture mapping is applied in several steps: creating a texture object and binding it to the 3D model, loading the texture image, assigning coordinates to the texture object, and then applying it to the object or model. Below are examples of JavaScript and GLSL WebGL code that illustrate the steps mentioned above.

    Step 1 - Creating and binding the texture object. The texture object is created and stored in GPU memory. WebGL uses functions like gl.createTexture() and gl.bindTexture() to create and bind the texture object.

    JavaScript
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);

    Step 2 - Loading the texture image.
    The image is loaded into RAM by storing it in an image object; usually, the image is a PNG or JPEG. Then the image data is transferred to GPU memory. Note that the image loads asynchronously, so the transfer must happen in the image's onload handler.

    JavaScript
    var image = new Image();
    image.onload = function () {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    };
    image.src = 'texture.png';

    Step 3 - Assigning texture coordinates (UV mapping). In this step, the vertices of the 3D model are assigned texture coordinates, telling the renderer what part of the texture corresponds to each point on the object's surface. The shape below is a quadrilateral.

    JavaScript
    var texCoords = [
        vec2(0, 0), // bottom-left corner
        vec2(0, 1), // top-left corner
        vec2(1, 1), // top-right corner
        vec2(1, 0)  // bottom-right corner
    ];

    Step 4 - Applying the texture. This step is done in the fragment shader. The texture is applied to the surface of the object based on the interpolated texture coordinates.

    GLSL
    uniform sampler2D uTextureMap; // sampler holding the 2D texture data
    in vec2 vTexCoord;  // texture coordinates passed from the vertex shader,
                        // used to look up the color from the texture
    out vec4 fragColor; // the final color

    void main() {
        // Samples the texture at vTexCoord and assigns the resulting color to
        // fragColor. The texture() function fetches the color from uTextureMap.
        fragColor = texture(uTextureMap, vTexCoord);
    }

    In a few lines of code, APIs such as WebGL allow programmers to implement texture mapping, which significantly enhances the detail of object surfaces without the overhead of complex geometry. It enhances the realism of models by simulating details like wood grain, bricks, or fabric patterns with an image. For example, a plain, untextured cube would look flat, but the same cube with a texture could appear to have detailed brick patterns, with color variations and even surface roughness. Additionally, texture mapping allows for advanced effects such as environment mapping, which simulates reflections on shiny surfaces like metal or water.
“Highly reflective surfaces are characterized by specular reflections that mirror the environment. [...] We can, however, use variants of texture mapping that can give approximate results that are visually acceptable through environment maps or reflection maps” (Angel & Shreiner, 2020, p. 197). Note that environment mapping is a technique that converts 3D environment information into a 2D texture format.

Furthermore, bump mapping is a 2D mapping technique used to create the illusion of surface irregularities and roughness by altering the normal vectors of a surface during shading. It gives the illusion of depth and texture without modifying the geometry. Bump mapping shows changes in shading as the light source or object moves, making the object appear to have variations in surface smoothness by using a grayscale texture (a bump map) to adjust the surface normals during lighting calculations (Angel & Shreiner, 2020).

To summarize, in computer graphics, texture mapping is a technique that enhances the visual realism of 3D objects by applying 2D images to their surfaces. Furthermore, advanced 2D texture mapping techniques like environment mapping and bump mapping allow for further intricate surface detail, all while keeping computational costs low by avoiding additional geometric complexity. All of this can be achieved with just a few lines of code using the abstraction layers provided by APIs such as WebGL.

References: Angel, E., & Shreiner, D. (2020). Chapter 7: Texture mapping. Interactive computer graphics (8th edition). Pearson Education, Inc. ISBN: 9780135258262

  • OSI Model, TCP/IP Framework, and Network Topologies Explained

    This article provides a detailed explanation of the OSI model and its seven layers. It also explores the TCP/IP model, compares it to the OSI framework, and examines network topologies, their characteristics, advantages, and applications. Alexander S. Ricciardi December 12, 2024

In networking, the Open Systems Interconnection (OSI) model is a standardized reference framework that describes how data flows in networks, or how networked devices communicate with each other. In 1977, the International Organization for Standardization (ISO) developed OSI to standardize the interoperability of multivendor communications systems into one cohesive model (uCertify, 2019 a). The OSI model is a reference model, not a reverence model (Wallace, 2020). In other words, the model does not need to be revered as a framework into which every network component or device must neatly fit. However, it can be used as a tool to explain and understand where different network components or devices reside. This makes the model very useful for diagnosing and fixing network issues, as it helps isolate problems within its different layers.

The OSI model is composed of seven layers:
Layer 1: The physical layer
Layer 2: The data link layer
Layer 3: The network layer
Layer 4: The transport layer
Layer 5: The session layer
Layer 6: The presentation layer
Layer 7: The application layer

Note that the application layer is the last in the OSI queue, as it is the closest to the user. However, graphically the layers are usually represented as a stack, bottom-up, as illustrated in Figure 1. Figure 1 OSI Layers Note: From “The OSI reference model. CompTIA Network+ Pearson N10-007”, Figure 2.2, by uCertify (2019 a). Each layer represents a different network functionality, as shown in Figure 2. Figure 2 OSI vs. TCP/IP Note: From “Objective 1.01 Explain, compare, and contrast the OSI layers” by vWannabe (n.d.).
In Figure 2, the OSI stack is compared to the TCP/IP stack model, a reference model based on the TCP/IP protocol suite. The TCP/IP model is used to describe communications on the Internet and simplifies the OSI layers into four categories: Network Interface (Network Access layer), Internet (Internet layer), Transport (Host-to-Host layer), and Application (Process/Application layer), see Figure 3. Figure 3 OSI and TCP/IP Note: From “The OSI reference model. CompTIA Network+ Pearson N10-007”, Figure 2.15, by uCertify (2019 a).

The TCP/IP layers map to the OSI layers as follows:
Network Interface: Combines the physical and data link layers of the OSI model.
Internet: Corresponds to the network layer of the OSI model.
Transport: Maps directly to the transport layer of the OSI model.
Application: Consolidates the session, presentation, and application layers of the OSI model.

As shown above, for me, the OSI model is a great tool for understanding network systems and diagnosing issues. When paired with the TCP/IP model, it provides practical insights into troubleshooting and understanding Internet systems, which is where most of today's networks operate.

Another important concept to understand is network topology. Topology describes the arrangement of devices and connections within a network, either physically (physical topology) or logically (logical topology). Below is an illustration of the most common topologies: Figure 4 Network Topologies Note: From “Lesson 1: Computer Network Fundamentals. CompTIA Network+ Pearson N10-007,” various figures, by uCertify (2019 b). Modified. The table below describes the characteristics, advantages, and limitations of various topologies. Table 1 Network Topologies Note: Data from “Lesson 1: Computer Network Fundamentals. CompTIA Network+ Pearson N10-007” by uCertify (2019 b). As shown in Table 1, each topology has its pros and cons depending on the needs, budget, and future goals of a business.
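The TCP/IP-to-OSI layer mapping described above can also be expressed as a small lookup table; a hypothetical JavaScript sketch (layer names follow Figure 3):

```javascript
// Maps each TCP/IP model layer to the OSI layer(s) it consolidates.
const tcpIpToOsi = {
  "Application":       ["Application", "Presentation", "Session"],
  "Transport":         ["Transport"],
  "Internet":          ["Network"],
  "Network Interface": ["Data Link", "Physical"],
};

// Reverse lookup: given an OSI layer, find the TCP/IP layer that covers it.
function tcpIpLayerFor(osiLayer) {
  for (const [tcpIpLayer, osiLayers] of Object.entries(tcpIpToOsi)) {
    if (osiLayers.includes(osiLayer)) return tcpIpLayer;
  }
  return null; // not an OSI layer name
}
```

A reverse lookup like this is handy when reasoning about which TCP/IP layer a given OSI-layer protocol or problem belongs to.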
One topology may be more suitable than another. Below, Table 2 compares the Star and Generic Mesh topologies, showcasing their advantages, disadvantages, and the types of business applications or use cases they are best suited for. Table 2 Comparison of Star and Generic Mesh Topologies Note: Data from “Lesson 1: Computer Network Fundamentals. CompTIA Network+ Pearson N10-007” by uCertify (2019 b). As shown in Table 2, a star topology is better suited for small-to-medium businesses due to its low cost, while a mesh topology is better suited for large enterprise networks, data centers, and IoT networks that require fault tolerance.

To summarize, the OSI model is a foundational framework for understanding network communication and diagnosing connection issues. It is particularly helpful when used in conjunction with the TCP/IP model to troubleshoot and understand modern Internet systems. Additionally, network topology helps to define the structure of networks by setting the arrangement of devices and connections, both physically and logically, enabling businesses to select the most suitable configuration based on their specific needs and goals.

References: uCertify. (2019 a). Lesson 2: The OSI reference model. CompTIA Network+ Pearson N10-007 (Course & Labs) [Computer software]. uCertify LLC. ISBN: 9781616910327 uCertify. (2019 b). Lesson 1: Computer network fundamentals. CompTIA Network+ Pearson N10-007 (Course & Labs) [Computer software]. uCertify LLC. ISBN: 9781616910327 vWannabe (n.d.). Objective 1.01 Explain, compare, and contrast the OSI layers. vWannabe.com. https://vwannabe.com/2013/07/29/objective-1-01-explain-compare-and-contrast-the-osi-layers/ Wallace, K. (2020, December 11). Networking foundations: Networking basics [Video]. LinkedIn Learning. https://www.linkedin.com/learning/networking-foundations-networking-basics/a-high-level-look-at-a-network?autoSkip=true&resume=false&u=2245842

  • The Relationship Between Software Modeling and Software Development

    This article explores the relationship between Software Modeling (SM) and Software Development (SD) within Software Engineering. It examines how SM, through techniques like UML diagrams, supports and enhances the SD process by improving communication, reducing errors, and providing essential documentation. Alexander S. Ricciardi December 11, 2024

Software Engineering (SE) is the art of engineering high-quality software solutions. The Object Oriented (OO) approach, the Software Development (SD) process, and Software Modeling (SM) are components of SE. This article explores these components, more specifically the relationship between SD and SM, and how, through this relationship, software development teams build systems that are efficient and robust and provide high-quality software solutions to users.

Software Engineering First, let's define Software Engineering. “The goal of SE is to produce robust, high-quality software solutions that provide value to users. Achieving this goal requires the precision of engineering combined with the subtlety of art” (Unhelkar, 2018, p.1). SE involves a wide range of functions, activities, and tasks, such as: Project management, business analysis, financial management, regulatory and compliance management, risk management, and service management functions. Functions are SE teams' responsibilities or disciplines that often span the entire software development lifecycle. Development processes, requirements modeling, usability design, operational performance, security, quality assurance, quality control, and release management activities. Activities are SE actions taken during certain stages of software development; they are often performed repeatedly within functions or processes. Tasks are small SE actions taken during certain functions or processes. They are often individual steps necessary to perform a specific activity.
(Unhelkar, 2018) As shown above, SE is a complex process that can be decomposed into four components which are essential to learn in order to adopt SE. These components are fundamentals of object orientation, modeling (the UML standard), process (SDLC, Agile), and experience (case studies and team-based project work). Figure 1 The Four Essential Components to Adopt Software Engineering Note: From “Software Engineering Fundamentals with Object Orientation. Software Engineering with UML” by Unhelkar (2018, p.2). Below is a brief definition of each component:

Object Oriented (OO) Object Oriented is the concept of object orientation based on Object Oriented Programming (OOP) languages such as Java and Python. OO is composed of six fundamentals that help in creating classes and programs that process and manipulate data and objects. These OO fundamentals are as follows:
Classification (grouping)
Abstraction (representing)
Encapsulation (modularizing)
Association (relating)
Inheritance (generalizing)
Polymorphism (executing)
(Unhelkar, 2018, p.5)

Software Modeling (SM) Software Modeling is a project modeling standard based on the Unified Modeling Language (UML) that is used to create diagrams to improve communication and participation from all project stakeholders. This also improves the quality of the software, reduces errors, and encourages easy acceptance of the solution by users. UML's purpose in SE is modeling, developing, and maintaining software. Figure 2 Purpose of Unified Modeling Language in Software Engineering Note: From “Software Engineering Fundamentals with Object Orientation. Software Engineering with UML” by Unhelkar (2018, p.13). UML's purposes in modeling, developing, and maintaining software can be listed as follows: Visualizing: The primary purpose of the UML is to visualize the software requirements, processes, solution design, and architecture.
Specifying: UML is used to facilitate the specification of modeling artifacts. For example, a UML class diagram can specify/describe the attributes and methods of a class, along with their relationships. Constructing: UML is used for software construction because it can be easily translated into code (e.g., C++, Java). Documenting: UML diagrams can be used as detailed documentation for requirements, architecture, design, project plans, tests, and prototypes. Maintaining: UML diagrams are an ongoing aid for the maintenance of software systems. Additionally, they provide a visual representation of a project's existing system, architecture, and IT design. This allows developers to identify the correct places to implement changes and to understand the effect of their changes on the software's functionalities and behaviors. (Unhelkar, 2018)

Process or Software Development (SD) Process, or Software Development, is the process that defines activities and phases, as well as providing direction to the different teams of designers and developers throughout the Software Development Lifecycle (SDLC). Methodologies such as Waterfall and Agile are used to guide and give structure to the development process. Additionally, they help the development teams complete projects efficiently, meet standards, and meet user requirements. The Waterfall methodological approach is linear and plan-driven, whereas the Agile methodological approach is more flexible and adaptive. These approaches are usually structured around 5 key components:
Requirements gathering and analysis
Design and architecture
Coding and implementation
Testing and quality assurance
Deployment and maintenance
(Institute of Data, 2023, p.2)

Experience Experience, or case studies and team-based project work, is the process of learning a project's best approaches and solutions through experimenting with UML and object-oriented fundamentals.
“Experience in creating UML models, especially in a team environment, is a must for learning the art of SE” (Unhelkar, 2018, p.2).

How Software Modeling Supports Software Development As described above, SD and SM are components of SE that play different roles in the SDLC. The difference between SD and SM resides in SD being the methodological process that guides the creation and development of software, and SM being the representation of the software's architecture and functionality through diagrams based on UML. SM's primary role is to support SD by providing a visual representation of the project, reducing errors and scope creep, and providing documentation:

Project visualization: UML diagrams, especially use case diagrams, allow stakeholders to visualize program functionality and behaviors at a high level (Fenn, 2017). This helps teams focus on where they need more requirements, details, and analysis. It supports the SD phase of requirements gathering and analysis.

Reducing errors and scope creep: Software modeling can provide a clear model of the project by serving as a reference for the project requirements, minimizing errors, misunderstandings, and scope creep, particularly during the early stages of the software development process (Fenn, 2017). Scope creep is expanding or adding to the project requirements or objectives beyond the original scope. It supports the SD phase of design and architecture.

Providing documentation: UML diagrams can serve as living documentation for the project, describing the project's functionality and behaviors as it is developed and after deployment. This documentation can help with decision-making for functionality/behavior implementation and maintenance of the software. It supports the SD phases of “coding and implementation” and “deployment and maintenance.”

By applying the concepts listed above, SM helps the SD process create efficient, robust, high-quality software solutions that provide value to users.
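The static structure that a class diagram captures, classes with attributes and operations plus relationships such as inheritance, maps directly onto object-oriented code. Here is a minimal hypothetical sketch (JavaScript for brevity; the class and member names are invented for illustration):

```javascript
// One "box" in a class diagram: the class name, an attribute, and operations.
class BankAccount {
  #balance = 0; // encapsulation: the attribute is private to the class

  deposit(amount) { this.#balance += amount; }
  withdraw(amount) { this.#balance -= amount; }
  get balance() { return this.#balance; }
}

// A generalization (inheritance) arrow in the diagram:
// CheckingAccount is-a BankAccount and adds its own attribute.
class CheckingAccount extends BankAccount {
  constructor(overdraftLimit) {
    super();
    this.overdraftLimit = overdraftLimit;
  }
}

const account = new CheckingAccount(500);
account.deposit(100); // operation inherited from the parent class
```

Each diagram element (class box, attribute compartment, generalization arrow) corresponds to one construct in the code, which is what makes class diagrams such effective blueprints.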
UML Example The following is a UML Class diagram of a simple banking manager Java program that utilizes the swing library, a graphical user interface (GUI) library. The program manages bank accounts and checking accounts with various functionalities such as creating accounts, attaching checking accounts, depositing and withdrawing funds, and viewing account balances.   In UML, class diagrams are one of six types of structural diagrams. Class diagrams are fundamental to the object modeling process and model the static structure of a system. Depending on the complexity of a system, you can use a single class diagram to model an entire system, or you can use several class diagrams to model the components of a system. Class diagrams are the blueprints of your system or subsystem. You can use class diagrams to model the objects that make up the system, to display the relationships between the objects, and to describe what those objects do and the services that they provide. (IBM, 2021) Figure 3 UML Class Diagram Example Note: From “Module-4: Portfolio Milestone” By Ricciardi (2024, p.5) To summarize, SE is the art of engineering high-quality software solutions through OO, SD, and SM. SM helps the SD process by providing clear visual representations of system requirements and architecture, reducing errors, minimizing scope creep, improving communication among stakeholders, and serving as living documentation throughout the software development lifecycle. References: Fenn, B. (2017, October). UML in agile development. Control Engineering, 64(10), 48. https://csuglobal.idm.oclc.org/login?qurl=https%3A%2F%2Fwww.proquest.com%2Ftrade-journals%2Fuml-agile-development%2Fdocview%2F2130716718%2Fse-2%3Faccountid%3D38569 IBM (2021, May 5) Rational Software Modeler 7.5.5. IBM. https://www.ibm.com/docs/en/rsm/7.5.0?topic=structure-class-diagrams Institute of Data (2023, September 5). Understanding software process models: What they are and how they work. Institute of Data. 
https://www.institutedata.com/us/blog/understand-software-process-models/ Ricciardi, A. (2024, July 7). Module-4: Portfolio Milestone. CSC372: Programming 2. Department of Computer Science, Colorado State University Global. https://github.com/Omegapy/My-Academics-Portfolio/blob/main/Programming-2-CSC372/Module-4%20Portfolio%20Milestone/Module-4%20Portfolio%20Milestone.pdf Unhelkar, B. (2018). Software engineering fundamentals with object orientation. Software engineering with UML. CRC Press. ISBN 9781138297432

  • The Role of Probability in Decision-Making: A Blackjack Case Study

    This article examines the concept of probability as a tool for quantifying uncertainty and making informed decisions, using the game of Blackjack as an example. By applying probability principles such as conditional probability, dependency, and Bayes' Theorem, it demonstrates how mathematical methods can evaluate risks, predict outcomes, and guide strategic choices in uncertain scenarios. Alexander S. Ricciardi November 17, 2024

Uncertainty, by definition, is a nebulous concept; it encapsulates the unknowns and ambiguities. Probability plays a crucial role in quantifying uncertainty, helping establish degrees of belief, expressed as percentages, in the likelihood of an outcome or outcomes in a given scenario or set of scenarios. This paper explores the concept of probability by applying it to an easy-to-understand example involving the game of Blackjack.

Probability Probability is the likelihood of something happening. It can also be defined as a mathematical method used to study randomness. In other words, probability is a mathematical method that deals with the chance of an event occurring (Illowsky et al., 2020). This section describes some of the fundamental concepts of probability, starting with the concept of sample space, often denoted Ω: the set of all possible outcomes from a scenario or a set of scenarios. An event, denoted E or ω, is a subset of the sample space; it consists of one or more outcomes. In probability theory, the probability of a specific possible outcome from the sample space, denoted P(E), is a value between 0 and 1, inclusive (Russell & Norvig, 2021). A '0' probability means the outcome will never occur, a '1' probability means the outcome will always occur, and a value between '0' and '1' expresses the likelihood of the outcome, with higher values indicating greater likelihood.
This can be formulated as follows: 0 ≤ P(E) ≤ 1 and P(Ω) = 1.

The probability method comes with a set of rules, properties, laws, and theorems that are fundamental principles used for computing the likelihood of events occurring. Below is a list of some of these rules, properties, principles, and theorems, where A and B are events.

- Addition rule: Computes the probability of either one of two events occurring (Data Science Discovery, n.d.). For mutually exclusive events (events that cannot occur simultaneously): P(A ∪ B) = P(A ∨ B) = P(A) + P(B). For non-mutually exclusive events: P(A ∪ B) = P(A ∨ B) = P(A) + P(B) - P(A ∧ B).

- Multiplication rule: Computes “the joint probability of multiple events occurring together using known probabilities of those events individually” (Foster, n.d., p.1). For independent events (the occurrence of one does not affect the other): P(A ∩ B) = P(A ∧ B) = P(A) ∙ P(B). For dependent events (the occurrence of one does affect the other): P(A ∩ B) = P(A ∧ B) = P(A) ⋅ P(B|A), where P(B|A) is a conditional probability (see below).

- Complement rule: “The complement of an event is the probability that the event does not occur” (Eberly College of Science, n.d., Section 2.1.3.2.4). P(¬A) = 1 - P(A).

- Conditional probability: The probability of an event occurring given that another event has already occurred. The probability of A given B is: P(A|B) = P(A ∧ B) / P(B).

- Bayes' Theorem: Computes the reverse of the conditional probability. It updates the probability of an outcome based on new evidence. It can be defined as: P(A|B) = P(B|A) ⋅ P(A) / P(B). Where: P(A) is the prior probability of an event A. P(B) is the probability of an event B. P(B|A) is the probability of an event B occurring given that A has occurred. P(A|B) is the probability of an event A occurring given that B has occurred. (Dziak, 2024)

These rules, properties, principles, and theorems provide a range of tools for solving probabilistic problems in simple scenarios such as rolling dice and card games like Blackjack.
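The rules above translate directly into code; a small self-contained JavaScript sketch (function names are my own, chosen for readability):

```javascript
// Addition rule (non-mutually-exclusive): P(A or B) = P(A) + P(B) - P(A and B)
function pUnion(pA, pB, pAandB) { return pA + pB - pAandB; }

// Multiplication rule (dependent events): P(A and B) = P(A) * P(B | A)
function pJoint(pA, pBgivenA) { return pA * pBgivenA; }

// Complement rule: P(not A) = 1 - P(A)
function pComplement(pA) { return 1 - pA; }

// Conditional probability: P(A | B) = P(A and B) / P(B)
function pConditional(pAandB, pB) { return pAandB / pB; }

// Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
function bayes(pBgivenA, pA, pB) { return (pBgivenA * pA) / pB; }

// Example of dependency: drawing two aces in a row from a 52-card deck.
// P(first ace) = 4/52, and P(second ace | first ace) = 3/51.
const twoAces = pJoint(4 / 52, 3 / 51); // ≈ 0.00452
```

The two-aces example is exactly the "without replacement" dependency that matters in the Blackjack scenario that follows: the first draw changes the deck, so the second probability is conditioned on it.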
Blackjack Scenario Let's explore a Blackjack scenario where the dealer is showing a 10. In Blackjack, specific rules dictate when to take a hit or stand, especially when the dealer is showing a 10, that is, when the house hand is showing a 10. Suppose a single deck is in play, and four cards are already dealt. If you have a 10 and a 7 visible, and the dealer shows a 10, let's calculate the probability that the dealer's hidden card is an 8, 9, or a face card, and why it makes sense to hit on a 16 but to stand on a 17.

In this Blackjack scenario, the concepts of dependency and conditional probability play an essential role in calculating the probabilities. Two events are said to be dependent if the outcome of one event affects the probability of the other. In this scenario, the events are dependent because the cards are drawn without replacement. This means that each card dealt to the player's or the house's hand changes the composition of the deck and thus affects the probabilities of future events.

Analysis of the Scenario Now that the probability methods have been established, let's analyze the problem in more detail. In Blackjack, the goal of a player is to finish the game with a higher hand than that of the house without exceeding 21, as going over 21 is known as 'busting' and is an automatic loss (Master Traditional Games, n.d.). The face cards have a value of 10, and an Ace can be treated as either 1 or 11, with the player choosing the value. The player and the house can either hit or stand; the player or players go first, and after all the players stand, the house goes next. Note that all players are playing against the house, not each other, and if the house's hand matches a player's hand, it results in a draw between the player and the house.

A standard deck has 52 cards. The player's hand has a 10 and a 7, totaling 17. The house has a 10 as the dealer up-card, and a fourth card is on the table, the dealer hole-card.
Therefore, 3 card values are known, and 52 - 3 = 49 cards are unknown. The scenario calls for calculating the probability that the house's other card, the dealer hole-card, is an 8, 9, or face card, as any of those cards would give the house a better hand than the player. A standard deck has 4 8s, 4 9s, 4 Jacks, 4 Queens, and 4 Kings. Mathematically, this can be translated to: 4 (8s) + 4 (9s) + 4 (Jacks) + 4 (Queens) + 4 (Kings) = 20 house-favorable cards. This means that from a set of 49 unknown cards, 20 are favorable to the house. Thus, the probability that the house's other card, the dealer hole-card, is one of the house's favorable cards is: P = 20/49 ≈ 0.4082. This means that the probability of the house having a better hand than the player is 40.82% when considering only the 8s, 9s, or face cards as the possible cards on the table.

On a side note, the Ace was not considered in this scenario, and an Ace can be treated as a 1 or 11. If the dealer hole-card is an Ace, then the house's hand would be 10 + 11 = 21, Blackjack. A card deck has 4 Aces; additionally, the 2 remaining 10 cards were also not considered in this scenario. This changes the number of house-favorable cards to 20 + 4 (Aces) + 2 (10s) = 26 and the probability to: P = 26/49 ≈ 0.5306. This considerably increases the probability of the house having a better hand than the player, from 40.82% to 53.06%.

Why It Makes Sense to Hit on a 16 but Stand on a 17? The scenario claims that it makes sense to hit on a 16 but to stand on a 17, if the house will stand on a 17 and above. Let's explore the scenario where the player has a hand of 16. If the player decides to hit, they can improve their hand by drawing a 1 (Ace), 2, 3, 4, or 5; that is, a total of 5 types of cards out of a set of 13 types of cards (Ace through King) are favorable to the player with a hand of 16.
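The counts worked out above translate directly into the scenario's probabilities; a quick numerical check in JavaScript (plain arithmetic, no card-game library):

```javascript
// Dealer hole-card: 20 favorable cards (8s, 9s, face cards) of 49 unknown.
const pDealerBeats = 20 / 49;      // ≈ 0.4082

// Adding the 4 Aces and the 2 remaining 10s: 26 favorable of 49 unknown.
const pDealerBeatsWide = 26 / 49;  // ≈ 0.5306

// Player on 16: 5 of the 13 card ranks (Ace through 5) improve the hand.
const pImproveOn16 = 5 / 13;       // ≈ 0.3846

// On 17: only 4 ranks improve the hand, so 9 of the 13 ranks bust it.
const pBustOn17 = 9 / 13;          // ≈ 0.6923
```

Running these through a calculator, or the snippet above, reproduces the percentages used in the analysis.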
Note that this calculation is based not on the number of cards left in the deck but on the number of types of cards found in a deck, 13, as the specific card composition behind the player's hand of 16 is unknown. Therefore, the probability of the player hitting a favorable type of card is: P = 5/13 ≈ 0.3846. Thus, if the house's hand is a 17 or above, it makes sense for the player to hit, as they have an approximately 38.46% chance of drawing a favorable card. If the player does not hit, they will automatically lose, since the house has a better hand.

Now let's explore the scenario where the player has a hand of 17. If the player decides to hit, they can improve their hand only by drawing a 1 (Ace), 2, 3, or 4; a total of 4 types of cards out of a set of 13 types of cards are favorable to a hand of 17. Therefore, the probability of hitting a favorable card is: P = 4/13 ≈ 0.3077. This means that a hand of 17 or above has a 9/13 ≈ 69.23% or higher chance of hitting an unfavorable card and busting. Thus, if the house's hand is 17 or above, it makes sense for the player to stand, as the house has approximately a 69.23% or higher chance of drawing an unfavorable card and busting if it decides to hit. This is likely why the house typically stands on 17, as it plays against multiple players who may have better hands.

Conclusion Probability plays a crucial role in quantifying uncertainty; it helps establish the likelihood of an outcome or outcomes in a given scenario or set of scenarios. This paper explored the concept of probability by applying its principles to a practical example, the game of Blackjack. This simple example shows how powerful the concept of probability can be, demonstrating how probability can be used to evaluate risks, calculate potential outcomes, and make strategic choices.
Probability, as a tool for making decisions, can be applied not only in games but also in various real-world situations where uncertainty is a factor.

References: Data Science Discovery (n.d.). Multi-event probability: Addition rule. University of Illinois at Urbana-Champaign (UIUC). https://discovery.cs.illinois.edu/learn/Prediction-and-Probability/Multi-event-Probability-Addition-Rule/#Addition-Rule-Formula Dziak, M. (2024). Bayes' theorem. Salem Press Encyclopedia of Science. https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip,uid&db=ers&AN=89142582&site=eds-live Eberly College of Science (n.d.). 2: Describing data, part 1. STAT 200: Elementary statistics. Department of Statistics, PennState Eberly College of Science. https://online.stat.psu.edu/stat200/lesson/2/2.1/2.1.3/2.1.3.2/2.1.3.2.4 Foster, J. (n.d.). Multiplication rule for calculating probabilities. Statistics By Jim. https://statisticsbyjim.com/probability/multiplication-rule-calculating-probabilities/ Illowsky, B., Dean, S., Birmajer, D., Blount, B., Einsohn, M., Helmreich, J., Kenyon, L., Lee, S., & Taub, J. (2020, March 27). 1.1 Definitions of statistics, probability, and key terms. Statistics. OpenStax. https://openstax.org/books/statistics/pages/preface Master Traditional Games (n.d.). The rules of Blackjack. Masters of Games. https://www.mastersofgames.com/rules/blackjack-rules.htm?srsltid=AfmBOoojETz5j0oD9X_OW-mIYhepbOfCZm3sH6Z4o2klRDmMLHYO6s5m Russell, S., & Norvig, P. (2021). 12.2 Basic probability notation. Artificial intelligence: A modern approach (4th edition). Pearson Education, Inc. ISBN: 9780134610993; eISBN: 9780134671932.

  • AI and Chess: Shaping the Future of Strategic Thinking and Intelligence

    This article explores the evolving relationship between Artificial Intelligence (AI) and chess, highlighting how AI has transformed chess strategy and player training, while chess has contributed to the advancement of AI technologies. Alexander S. Ricciardi October 8, 2024

Artificial Intelligence (AI) and the game of chess have an ongoing relationship that began in the 1990s and gained prominence when human chess champions and AI faced each other, notably in 1997 when IBM's Deep Blue defeated chess champion Garry Kasparov, putting AI capabilities into the public spotlight (Rand, 2024). This influenced the evolution of chess strategy over the past few decades by bringing a better understanding of the game (Deverell, 2023). AI is now a tool for game analysis and player training, making chess more popular than ever. For its part, the game influenced the evolution of AI.

The Deep Blue supercomputer was able to evaluate around 200 million chess positions per second, roughly the capacity to look 12 to 30 moves ahead; this gave the AI greater tactical insight than its human counterpart (Cipra, 1996). Deep Blue was an example of good old-fashioned AI, which uses heuristic reasoning. In other words, it is an example of symbolic planning AI, or narrow AI, also called expert AI, which can only play chess and operates based on pre-programmed functions and search algorithms. This AI model can also be defined as a model-based, utility-based agent, see Figure 1. A model-based, utility-based agent has an internal model of the chess environment and utilizes a utility function to evaluate and choose actions with the goal of winning the game.

Figure 1 Model-based Utility-based Agent Note: From “2.4 The Structure of Agents. Artificial Intelligence: A Modern Approach,” Figure 2.14, by Russell & Norvig (2021, p.55). Note that: “A model-based utility-based agent.
It uses a model of the world, along with a utility function that measures its preferences among states of the world. Then it chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome” (Russell & Norvig, 2021, p.55).

More modern expert chess AIs, such as Google's AlphaZero, use a deep neural network model combined with Monte Carlo tree search. A deep neural network is a type of machine learning model that uses an artificial neural network, a program that tries to mimic the structure of the human brain (Rose, 2023). A machine learning model is a program that learns through supervised or unsupervised learning, or a combination of both. It can be defined as a type of learning agent that improves its performance on a specific task by learning from data, rather than being explicitly programmed with fixed rules (Russell & Norvig, 2021). AlphaZero's predecessor, AlphaGo, learned to play the game of Go by playing millions of games against itself, using deep neural networks and backpropagation. Backpropagation is a training algorithm used in neural networks that adjusts the weights of the connections between neurons by propagating the error of the output backward through the network. In 2016, AlphaGo beat Go champion Lee Sedol using novel and brilliant moves, such as move 37, “— a move that had a 1 in 10,000 chance of being used. This pivotal and creative move helped AlphaGo win the game and upended centuries of traditional wisdom” (DeepMind, n.d., p.1). This changed how players play the games of Go and chess; players now study AI-generated strategies and incorporate them into their own gameplay. The expert AIs are now the teachers.
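The expected-utility decision rule quoted above can be sketched abstractly in code. In this hypothetical toy (all names and numbers invented for illustration), each action has possible outcome states with probabilities, and the agent picks the action that maximizes expected utility:

```javascript
// Expected utility of one action: average the utility of each possible
// outcome state, weighted by that outcome's probability.
function expectedUtility(outcomes, utility) {
  return outcomes.reduce((sum, o) => sum + o.p * utility(o.state), 0);
}

// Choose the action whose expected utility is highest.
function chooseAction(actions, utility) {
  let best = null;
  let bestEU = -Infinity;
  for (const action of actions) {
    const eu = expectedUtility(action.outcomes, utility);
    if (eu > bestEU) { bestEU = eu; best = action.name; }
  }
  return best;
}

// Toy example: a guaranteed-draw "safe" move versus a risky "gambit".
const actions = [
  { name: "safe",   outcomes: [{ state: "draw", p: 1.0 }] },
  { name: "gambit", outcomes: [{ state: "win", p: 0.6 }, { state: "loss", p: 0.4 }] },
];
const utility = (s) => ({ win: 1, draw: 0.5, loss: 0 }[s]);
// chooseAction(actions, utility) → "gambit" (expected utility 0.6 vs 0.5)
```

A real chess engine's "model" and evaluation are vastly more elaborate, of course; this only illustrates the averaging-and-maximizing structure the quote describes.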
On a side note: Large Language Models such as GPT-4o, o1, Anthropic Claude Sonnet 3.5, Gemini 1.5, and Grok 2 are more generalist models with strong language abilities and are not well suited for playing chess or the game of Go. However, they are based on the Transformer architecture, which relies on self-attention, meaning the model weighs the importance of different parts of the input data when making predictions. All that may be needed is scaling in computing power, data, and time, along with Chain-of-Thought reasoning, for these models to potentially reach the same levels in chess and Go as the expert AI models. Furthermore, many are predicting that Artificial General Intelligence (AGI), see Figure 2, will be achieved by 2030. Figure 2 The ANI-AGI-ASI Train Note: The illustration is a metaphor that depicts the rapid advancement of AI technology, progressing from Artificial Narrow Intelligence (ANI), which is less intelligent than human-level intelligence, to Artificial General Intelligence (AGI), which is equivalent to human-level intelligence, and to Artificial Super-Intelligence (ASI), which surpasses human intelligence. From "The AI revolution: The road to superintelligence Part-2," by Urban, 2015. To summarize, AI's relationship with chess has transformed the game itself and made it more popular than ever, but it has also contributed significantly to the advancement of AI technologies. Deep Blue and more modern models like AlphaZero are the children of this relationship. 
Moreover, as AI continues to evolve, even generalist models like large language models, combined with scaling in computing power, data, and time and the use of Chain-of-Thought reasoning, have the potential to reach the same level of strategic thinking as expert AI, not only in strategic games such as chess but also in other fields such as advanced physics and mathematics, potentially surpassing human abilities, if that is not already the case, ultimately opening the door to AGI and subsequently to ASI. References: Cipra, B. (1996, February 2). Will a computer checkmate a chess champion at last? Science, 271(5249), p.599. Retrieved from https://www.proquest.com/docview/213567322 DeepMind (n.d.). AlphaGo. Google. https://deepmind.google/research/breakthroughs/alphago/ Deverell, J. (2023, July 6). Artificial intelligence and chess: An evolving landscape. Regency Chess Company. https://www.regencychess.com/blog/artificial-intelligence-and-chess-an-evolving-landscape/ Rand, M. (2024, March 8). To understand the future of AI, look at what happened to chess. Forbes. https://www.forbes.com/sites/martinrand/2024/03/08/to-understand-the-future-of-ai-look-at-what-happened-to-chess/ Rose, D. (2023, October 12). Artificial intelligence foundations: Thinking machines: Welcome. LinkedIn Learning. https://www.linkedin.com/learning/artificial-intelligence-foundations-thinking-machines/welcome Russell, S. & Norvig, P. (2021). 2.4 The structure of agents. Artificial intelligence: A modern approach. 4th edition. Pearson Education, Inc. ISBN: 9780134610993; eISBN: 9780134671932. Urban, T. (2015, January 27). The AI revolution: The road to superintelligence Part-2. Wait But Why. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

  • Taxonomy and Frames in Programming Languages: A Hierarchical Approach to Knowledge Representation

    This article examines the application of taxonomy and frames in programming languages, focusing on Python and Java. It demonstrates how hierarchical taxonomies and frame-based representations organize and define relationships, properties, and attributes, providing a comprehensive approach to knowledge representation in computer science. Alexander S. Ricciardi November 3, 2024 Taxonomy in computer science is the act of classifying and organizing concepts. For example, in software engineering, it is used to classify software testing techniques, model-based testing approaches, and static code analysis tools (Novak et al., 2010). In data management, it is used to organize metadata and to categorize and manage data assets (Knight, 2021). In Artificial Intelligence, it is used to guide models to recognize patterns in data sets. Taxonomy can also be defined as the act of organizing knowledge within a domain by using a controlled vocabulary to make it easier to find related information (Knight, 2021), and it must: “Follow a hierarchic format and provide names for each object in relation to other objects. May also capture the membership properties of each object in relation to other objects. Have specific rules used to classify or categorize any object in a domain. These rules must be complete, consistent, and unambiguous. Apply rigor in specification, ensuring any newly discovered object must fit into one and only one category or object. Inherit all the properties of the class above it but can also have additional properties.” (Knight, 2021, p.1). In this paper, taxonomic knowledge and frames are implemented in the domain of Programming Languages, focusing on Python. “A frame is a data structure that can represent the knowledge in a semantic net” (Colorado State University Global, n.d., p.2). To implement the taxonomic knowledge, the paper follows three steps using first-order logic. 
The steps are Subset Information, Set Membership of Entities, and Properties of Sets and Entities. Then, the paper uses a tree-like structure to show how subcategories relate to parent categories. Additionally, it demonstrates how the hierarchical taxonomic structure interacts with frames by illustrating how attributes and properties are defined in the Python frame and how they align with the broader taxonomic categories. Finally, it explains how the combination of taxonomic relationships and frames provides a comprehensive representation of knowledge. The Three Steps to Implement Taxonomic Knowledge Note that the programming languages Java and Python are used as examples.

Step 1: Subset Information
In this step, first-order logic is used to represent subcategory relationships.
Compiled Languages and Interpreted Languages:
∀x High_Level_Compiled_Language(x) ⇒ Compiled_Language(x)
∀x Scripting_Language(x) ⇒ Interpreted_Language(x)
Programming Languages:
∀x Compiled_Language(x) ⇒ Programming_Language(x)
∀x Interpreted_Language(x) ⇒ Programming_Language(x)
Specific Languages:
∀x Java(x) ⇒ High_Level_Compiled_Language(x)
∀x Python(x) ⇒ Scripting_Language(x)
Additional Subcategories (Functional Languages and Logic Programming Languages):
∀x Functional_Language(x) ⇒ Programming_Language(x)
∀x Logic_Programming_Language(x) ⇒ Programming_Language(x)

Step 2: Set Membership of Entities
In this step, the category membership of specific languages is represented.
Instances of Programming Languages:
Java SE 23: Java(JavaSE23)
Python 3.13: Python(Python3_13)
Other Programming Languages (more examples): C(C23), Haskell(Haskell2010)

Step 3: Properties of Sets and Entities
In this step, the properties of the categories and programming languages are represented. 
Properties of Programming Languages:
All Programming Languages have Syntax and are used for Software Development:
∀x Programming_Language(x) ⇒ Has_Syntax(x)
∀x Programming_Language(x) ⇒ Used_For(x,"Software_Development")
Properties of Compiled and Interpreted Languages:
Compiled Languages have Execution Model 'Compiled':
∀x Compiled_Language(x) ⇒ Execution_Model(x,"Compiled")
Interpreted Languages have Execution Model 'Interpreted':
∀x Interpreted_Language(x) ⇒ Execution_Model(x,"Interpreted")
Properties of Specific Languages:
Java has Static Typing Discipline:
∀x Java(x) ⇒ Typing_Discipline(x,"Static")
Python has Dynamic Typing Discipline:
∀x Python(x) ⇒ Typing_Discipline(x,"Dynamic")
Java supports the 'Object-Oriented' paradigm:
∀x Java(x) ⇒ Supports_Paradigm(x,"Object-Oriented")
Python supports multiple paradigms:
∀x Python(x) ⇒ Supports_Paradigm(x,"Multi-Paradigm")
Properties of Entities:
Java SE 23's latest version is 23: Latest_Version(JavaSE23,"23")
Python 3.13's latest version is 3.13: Latest_Version(Python3_13,"3.13")

Hierarchical Taxonomy of Programming Languages
Below is a shortened, text-based, tree-like hierarchical structure representing the relationships between different languages in the domain of programming languages:

Programming Language
  Compiled Language
    High-Level Compiled Language: C, C++, Java, Rust
    Low-Level Compiled Language: Assembly Language
  Interpreted Language
    Scripting Language: Python, Ruby, Perl
      Shell Scripting Language: Bash, PowerShell
  Functional Language
    Pure Functional Language: Haskell
    Multi-Paradigm Functional Language: Scala, F#
  Logic Programming Language: Prolog

Note that some languages, like Python and Java, can be considered both interpreted and compiled languages; however, for the scope of this exercise, they are categorized as interpreted and compiled languages, respectively. 
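The three steps above can be sketched in Python. This is a minimal, illustrative encoding (the dictionary names `PARENTS`, `PROPERTIES`, and `INSTANCES`, and the helper `properties_of`, are my own, not part of the paper): subset relations, set memberships, and properties are stored as data, and an instance's inherited properties are derived by walking the subclass chain up to the root category.

```python
# Step 1: subset information — each category maps to its parent category.
PARENTS = {
    "High_Level_Compiled_Language": "Compiled_Language",
    "Scripting_Language": "Interpreted_Language",
    "Compiled_Language": "Programming_Language",
    "Interpreted_Language": "Programming_Language",
    "Java": "High_Level_Compiled_Language",
    "Python": "Scripting_Language",
}

# Step 3: properties attached to each category (a small subset shown).
PROPERTIES = {
    "Programming_Language": {"Used_For": "Software_Development"},
    "Compiled_Language": {"Execution_Model": "Compiled"},
    "Interpreted_Language": {"Execution_Model": "Interpreted"},
    "Java": {"Typing_Discipline": "Static"},
    "Python": {"Typing_Discipline": "Dynamic"},
}

# Step 2: set membership of entities — instance name -> category.
INSTANCES = {"JavaSE23": "Java", "Python3_13": "Python"}

def properties_of(category):
    """Walk the subclass chain to the root and merge inherited properties,
    root first, so that subcategories may override their ancestors."""
    chain = []
    while category is not None:
        chain.append(category)
        category = PARENTS.get(category)
    props = {}
    for cat in reversed(chain):
        props.update(PROPERTIES.get(cat, {}))
    return props

# Python 3.13 inherits Used_For and Execution_Model, and adds Typing_Discipline.
print(properties_of(INSTANCES["Python3_13"]))
```

Walking the chain Python → Scripting_Language → Interpreted_Language → Programming_Language reproduces the first-order-logic derivation of inherited properties given above.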
Hierarchical Taxonomy Visualization Figure 1 Hierarchical Taxonomy of Programming Languages Note: The diagram is a visual representation of the hierarchical taxonomy of programming languages. Data adapted from multiple sources: (Epözdemir, 2024; Foster, 2013; Gómez, n.d.; Peter Van Roy, 2008; Saxena, 2024; Startups, 2018; & Wikipedia contributors, 2024) Specific Frame: Python's Interaction With the Hierarchical Taxonomy This section illustrates a Python frame, which is a data structure representation of the Python programming language's attributes and properties. For comparison, a Java frame is also provided.

Figure 2 Python Frame

(Python
  Instance_Of: Scripting_Language;
  // Inherited properties and attributes
  Used_For: Software_Development;
  Execution_Model: Interpreted;
  Syntax: Easy_To_Use;
  // Properties and attributes specific to Python
  Creator: Guido van Rossum;
  First_Released: 1991;
  Typing_Discipline: Dynamic, Strong Typing;
  Paradigms: Object-Oriented, Imperative, Functional, Procedural, Reflective;
  License: Python Software Foundation License;
  Latest_Version: 3.13;
  Official_Website: www.python.org
)

Note: This is a frame representation of Python's properties and attributes. An example of an attribute is ‘Instance_Of’ and of a property is ‘Scripting_Language’. For comparison, below is a representation of the Java frame.

Figure 3 Java Frame

(Java
  Instance_Of: High_Level_Compiled_Language;
  // Inherited properties and attributes
  Used_For: Software_Development;
  Debugging: Friendly;
  Execution_Model: Compiled;
  // Properties and attributes specific to Java
  Creator: James Gosling;
  First_Released: 1995;
  Typing_Discipline: Static, Strong Typing;
  Paradigms: Object-Oriented, Class-based, Concurrent;
  License: GNU General Public License with Classpath Exception;
  Latest_Version: 23;
  Official_Website: www.oracle.com/java/
)

Note: This is a frame representation of Java's properties and attributes. A hierarchical taxonomy organizes entities into a tree-like structure. 
In the Programming Language hierarchical taxonomy, the root class (the category representing the domain), or the first node of the tree-like structure, is ‘Programming Language,’ with all other nodes as subclasses (subcategories) that inherit directly or indirectly from the ‘Programming Language’ root class. These relationships can be described as “is an instance of.” For example, all subclasses show the relation “is an instance of” ‘Programming Language’: ‘High-Level Compiled Language’ “is an instance of” ‘Compiled Language,’ which “is an instance of” ‘Programming Language’; therefore, ‘High-Level Compiled Language’ also shows the relationship “is an instance of” ‘Programming Language.’ This relationship is defined by the concept of inheritance, where a subclass inherits the properties and attributes of its parent and grandparent classes. Note that a subclass can have more than one parent class. For example, the parent class ‘Compiled_Language’ has a property ‘Execution_Model’ with the attribute ‘Compiled’; the subclass ‘High_Level_Compiled_Language’ and all the languages that are children of it will inherit the property ‘Execution_Model’ with the attribute ‘Compiled’. This can be translated into first-order logic as follows: ∀x High_Level_Compiled_Language(x) ⇒ Compiled_Language(x) ⇒ Execution_Model(x,"Compiled") Where ‘x’ is the instance of a programming language (e.g., Java SE 23) and ‘⇒’ means implies. When exploring the Python frame, we can see that one of its attributes is ‘Instance_Of’ with the property ‘Scripting_Language’; this shows that Python is a subclass of the ‘Scripting_Language’ class, and therefore Python inherits all the properties and attributes from ‘Scripting_Language’, which are ‘Syntax: Easy_To_Use’, ‘Execution_Model: Interpreted’, and ‘Used_For: Software_Development’. 
Additionally, ‘Syntax: Easy_To_Use’ is specific to ‘Scripting_Language.’ On the other hand, ‘Execution_Model: Interpreted’ and ‘Used_For: Software_Development’ are inherited by ‘Scripting_Language’ from ‘Interpreted_Language.’ Furthermore, ‘Execution_Model: Interpreted’ is specific to ‘Interpreted_Language,’ which inherits ‘Used_For: Software_Development’ from ‘Programming_Language.’ This can be translated into first-order logic as follows: ∀x Python(x) ⇒ Scripting_Language(x) ⇒ Syntax(x,Easy_To_Use) ∀x Python(x) ⇒ Scripting_Language(x) ⇒ Interpreted_Language(x) ⇒ Execution_Model(x,Interpreted) ∀x Python(x) ⇒ Scripting_Language(x) ⇒ Interpreted_Language(x) ⇒ Programming_Language(x) ⇒ Used_For(x,Software_Development) Where ‘x’ is the instance of a programming language (e.g., Python 3.13) and ‘⇒’ means implies. The rest of Python's properties and attributes are specific to it. On a side note, in polymorphism, a subclass can modify (override) the attribute value of a property inherited from a parent class. For example, a language could inherit ‘Syntax: Easy_To_Use’ from ‘Scripting_Language’ and modify the attribute ‘Easy_To_Use’ to ‘Hard_To_Use.’ Frame and Hierarchical Taxonomy Interactions Visualization This section illustrates, visually, the interactions between the hierarchical taxonomy and the Java and Python frames. Figure 4 Frame and Hierarchical Taxonomy Interactions (Java and Python) Note: The diagram illustrates the interactions between the hierarchical taxonomy and the Java and Python frames. Only the properties and attributes specific to each subclass are listed in its node container, as the inherited properties and attributes are listed in its parent class container nodes. 
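The frame inheritance and overriding just described map naturally onto Python's own class system. The sketch below is illustrative only: the class names mirror the taxonomy, the attribute values mirror the frames, and `HardScriptingLanguage` is a hypothetical subclass added solely to show a polymorphic override.

```python
class ProgrammingLanguage:
    used_for = "Software_Development"        # root-level property

class InterpretedLanguage(ProgrammingLanguage):
    execution_model = "Interpreted"          # specific to Interpreted_Language

class ScriptingLanguage(InterpretedLanguage):
    syntax = "Easy_To_Use"                   # specific to Scripting_Language

class PythonLang(ScriptingLanguage):
    typing_discipline = "Dynamic"            # specific to Python

class HardScriptingLanguage(ScriptingLanguage):
    syntax = "Hard_To_Use"                   # override of the inherited attribute

py = PythonLang()
# Attributes inherited through three levels of the taxonomy:
print(py.used_for, py.execution_model, py.syntax)
print(HardScriptingLanguage().syntax)
```

Looking up `py.syntax` walks the class hierarchy exactly the way the first-order-logic chains above walk the taxonomy, and the override in `HardScriptingLanguage` shadows the parent's value without changing it for other subclasses.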
Data adapted from multiple sources: (Epözdemir, 2024; Foster, 2013; Gómez, n.d.; Peter Van Roy, 2008; Saxena, 2024; Startups, 2018; & Wikipedia contributors, 2024) As shown in Figure 4, combining hierarchical taxonomic relationships and frames creates a powerful tool for representing knowledge. The hierarchical taxonomy illustrates the relationships between categories; for example, the ‘Scripting Language’ category is a subcategory of the ‘Interpreted Language’ category, which is a subcategory of the ‘Programming Language’ category, making ‘Scripting Language’ a sub-subcategory of the root category ‘Programming Language,’ which represents the domain. Additionally, the implementation of frames into the diagram shows the entities' properties and attributes and how they are inherited from other categories. For example, Python's specific properties and attributes are listed in its node container, and its inherited properties and attributes are listed in its parent, grandparent, and great-grandparent class node containers. This creates a robust representation of knowledge that provides depth and clarity, allowing users to navigate complex relationships effortlessly. References: Colorado State University Global. (n.d.). Module 4: Knowledge representation [Interactive lecture]. Canvas. Retrieved November 1, 2024, from https://csuglobal.instructure.com/courses/100844/pages/4-dot-2-frames?module_item_id=5183634 Epözdemir, J. (2024, April 10). Programming language categories. Medium. https://medium.com/@jepozdemir/programming-language-categories-6b786d70e8f7 Foster, D. (2013, February 20). Visual guide to programming language properties. DaFoster. https://dafoster.net/articles/2013/02/20/visual-guide-to-programming-language-properties/ Gómez, R. (n.d.). Alphabetical list of programming languages. programminglanguages.info. https://programminglanguages.info/languages/ Knight, M. (2021, March 12). What is taxonomy? Dataversity. 
https://www.dataversity.net/what-is-taxonomy/ Novak, J., Krajnc, A., & Žontar, R. (2010, May 1). Taxonomy of static code analysis tools. IEEE Xplore. https://ieeexplore.ieee.org/document/5533417 Peter Van Roy. (2008). The principal programming paradigms. https://webperso.info.ucl.ac.be/~pvr/paradigmsDIAGRAMeng108.pdf Saxena, C. (2024, October 17). Top programming languages 2025: By type and comparison. ISHIR. https://www.ishir.com/blog/36749/top-75-programming-languages-in-2021-comparison-and-by-type.htm Startups, A. (2018, June 20). Choosing the right programming language for your startup. Medium. https://medium.com/aws-activate-startup-blog/choosing-the-right-programming-language-for-your-startup-b454be3ed5e2 Wikipedia contributors. (2024, November 3). List of programming languages by type. Wikipedia. https://en.wikipedia.org/wiki/List_of_programming_languages_by_type

  • Truth Tables: Foundations and Applications in Logic and Neural Networks

    This article explores the role of Truth Tables (TTs) in evaluating logical statements by systematically analyzing relationships between propositions, providing examples and foundational concepts in propositional logic. Additionally, it examines innovative applications of TTs in Convolutional and Deep Neural Networks. Alexander S. Ricciardi October 20, 2024 Truth Tables (TTs) evaluate logical statements by systematically analyzing the relationships between the truth and falsehood of the propositions within those statements. This essay demonstrates the use of TTs by providing two examples using three propositions and analyzing their logical relationships. It also briefly explores how TTs can be used to implement Truth Table networks (TT-net), a Convolutional Neural Network (CNN) model that can be expressed in terms of TTs, and how, when combined with Deep Neural Networks (DNNs), TTs can create a novel Neural Network Framework (NNF) called Truth Table rules (TT-rules). Definition A Truth Table evaluates all possible truth values returned by a logical expression (Sheldon, 2022). The returned truth values are binary, meaning either true or false (not true); they may be referred to as Boolean values. Boolean is a term that represents a system of algebraic notation used to represent logical propositions, usually by means of the binary digits 0 (false) and 1 (true) (Oxford Dictionary, 2005). Boolean algebra, related mathematical fields, and the sciences rely on Boolean logic to show the possible outcomes of a logical expression or operation in terms of its truth or falseness, which can be expressed using numbers, characters, or words. In programming languages such as C and C++, any non-zero Boolean return value is considered true; in the Java programming language, however, a boolean value can only be ‘true’ or ‘false’. On the other hand, TTs usually use the letters ‘T’ for true and ‘F’ for false to represent truth values. 
Propositional Logic As mentioned earlier, Boolean values are used to represent logical propositions. A logical proposition, also known as an atomic sentence, is a sentence that can either be true or false, but not both (James, 2014). Propositional logic (also known as sentential logic or Boolean logic) is the process of forming logical statements, also known as complex sentences, by combining logical propositions (Russell & Norvig, 2021). An atomic sentence is represented by a single proposition symbol, such as P, Q, R, or W₁₃, that can be assigned a true or false Boolean value. For example, P = T, P is true, or P = F, P is false, but never both. To combine atomic sentences into a logical statement, operators like AND (‘∧’), OR (‘∨’), and NOT (‘¬’) are used, as well as symbols to express implications, such as ‘⇒’ for ‘implies’ and ‘⇔’ for ‘if and only if’. The table below lists the five basic logical operations forming complex sentences using the operators and symbols just discussed. Table 1 Basic Logical Operations Note: From “2.2: Introduction to truth tables. Mat 1130 mathematical ideas,” by Lippman (2022), modified. Complex sentences can combine more than one operation, for example, (W₁₁ ∧ P₁₃) ∨ W₂₂ ⇔ ¬W₂₄. Additionally, the operators follow a precedence order similar to that of the arithmetic operators; it is ‘¬’, ‘∧’, ‘∨’, ‘⇒’, ‘⇔’, with ‘¬’ having the highest precedence (Russell & Norvig, 2021). Furthermore, two atomic sentences P and Q are logically equivalent if they are true in the same set of models, written P ≡ Q. A model is a specific assignment of truth values to all the atomic sentences in a logical expression. 
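Equivalence in every model can be checked mechanically by enumerating all possible truth assignments. The short Python sketch below is an illustration of this idea (the function name `equivalent` and the encoding of sentences as functions of Boolean arguments are my own); it verifies De Morgan's law and contraposition by brute force over all models:

```python
from itertools import product

def equivalent(s1, s2, n_vars):
    """Two sentences are logically equivalent iff they agree in every
    model, i.e., under every assignment of truth values to their
    atomic sentences (2**n_vars assignments for n_vars symbols)."""
    return all(s1(*model) == s2(*model)
               for model in product([True, False], repeat=n_vars))

# De Morgan: ¬(P ∧ Q) ≡ (¬P ∨ ¬Q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))  # True

# Contraposition: (P ⇒ Q) ≡ (¬Q ⇒ ¬P), with X ⇒ Y encoded as (not X) or Y
print(equivalent(lambda p, q: (not p) or q,
                 lambda p, q: (not (not q)) or (not p), 2))  # True
```

Each tuple produced by `product` is one row of the truth table, so the check is exactly a column-by-column comparison of the two sentences' truth tables.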
This equivalence also applies to complex sentences, and it has the following properties:
o (P ∧ Q) ≡ (Q ∧ P) — commutativity of ∧
o (P ∨ Q) ≡ (Q ∨ P) — commutativity of ∨
o ((P ∧ Q) ∧ W) ≡ (P ∧ (Q ∧ W)) — associativity of ∧
o ((P ∨ Q) ∨ W) ≡ (P ∨ (Q ∨ W)) — associativity of ∨
o ¬(¬P) ≡ P — double-negation elimination
o (P ⇒ Q) ≡ (¬Q ⇒ ¬P) — contraposition
o (P ⇒ Q) ≡ (¬P ∨ Q) — implication elimination
o (P ⇔ Q) ≡ ((P ⇒ Q) ∧ (Q ⇒ P)) — biconditional elimination
o ¬(P ∧ Q) ≡ (¬P ∨ ¬Q) — De Morgan
o ¬(P ∨ Q) ≡ (¬P ∧ ¬Q) — De Morgan
o (P ∧ (Q ∨ W)) ≡ ((P ∧ Q) ∨ (P ∧ W)) — distributivity of ∧ over ∨
o (P ∨ (Q ∧ W)) ≡ ((P ∨ Q) ∧ (P ∨ W)) — distributivity of ∨ over ∧
(Russell & Norvig, 2021, p.222)
Examples of Truth Tables This section explores the logic of two sentences: one involving a conditional statement with a negation and a conjunction, and the other involving a biconditional statement with a disjunction. The sentences in natural language are: If it is sunny and I do not work today, then I will go to the beach. I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours. The first step is to identify the atomic sentences that are part of the natural language sentences, followed by the complex sentences and the logical operators that combine them; the last step is to form the table based on the atomic sentences, complex sentences, and logical operators. Note that atomic sentences, also called atomic propositions, are simple propositions that contain no logical connectives (Lavin, n.d.). Thus, their truth values can be set to true or false to evaluate more complex propositions, also called complex sentences. Example 1 Let's start with the first sentence, “If it is sunny and I do not work today, then I will go to the beach.” The atomic sentences are:
o P: “The weather is sunny.”
o Q: “I work today.”
o R: “I will go to the beach.”
The logic operators are:
o ‘¬’ (not)
o ‘∧’ (and)
o ‘⇒’ (conditional) 
Note that ‘⇒’ corresponds to the terms ‘implies’ or ‘if then’. The complex sentences are:
o ¬Q
o P ∧ ¬Q
o (P ∧ ¬Q) ⇒ R
Now, let's make the TT: Table 2 Sentence 1 Truth Table The TT is a world model that explores and evaluates all the possible truth values of the atomic and complex sentences. However, not all the table values bear relevance in proving the logical validity of sentence 1. In other words, irrelevant propositions can be ignored, no matter how many of them there are (Russell & Norvig, 2021). For example, if R is false, “I am not going to the beach” regardless of whether P and Q are true or false, making the case where R is false irrelevant in determining the validity of sentence 1. Additionally, to prove the logical validity of sentence 1, both (P ∧ ¬Q) and R need to be true, and both P and ¬Q need to be true as well. This concept is similar to coding an ‘if’ statement in a programming language, where two conditions combined with the logical ‘and’ operator must both be true for the code after the ‘then’ clause to execute; for instance, ‘if (A && B) then print("A and B are both true");’. Note that the proposition ‘print("A and B are both true");’ is always true if (A and B) is true. Thus, the relevant propositions for this example are found in row three of the table: Table 3 Sentence 1 Truth Table Row 3
o P: T — “The weather is sunny” is true.
o Q: F — “I work today” is false.
o ¬Q: T — “I do not work today” is true.
o R: T — “I will go to the beach” is true.
o P ∧ ¬Q: T — “It is sunny and I do not work” is true.
o (P ∧ ¬Q) ⇒ R: T — “If it is sunny and I do not work today, then I will go to the beach” is true.
Therefore, the sentence “If it is sunny and I do not work today, then I will go to the beach” is logically sound. Example 2 Now, let's explore the sentence “ I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours. 
” The atomic sentences are:
o P: “I complete all homework assignments.”
o Q: “I study for at least 10 hours.”
o R: “I will pass the exam.”
The logic operators are:
o ‘∨’ (or)
o ‘⇔’ (biconditional)
Note that ‘⇔’ corresponds to the term ‘if and only if’. The complex sentences are:
o P ∨ Q
o (P ∨ Q) ⇔ R
Now, let's make the TT: Table 4 Sentence 2 Truth Table As in example 1, the TT is a world model that explores and evaluates all the possible truth values of the atomic and complex sentences. However, not all the table values bear relevance in proving the validity of sentence 2. Both (P ∨ Q) and R need to be true to prove that the sentence is logically valid. Additionally, only one of the atomic sentences in the proposition (P ∨ Q) needs to be true for the proposition to be true. Thus, the relevant propositions for this example are found in rows one, three, and five of the table: Table 5 Sentence 2 Truth Table Rows 1, 3, and 5
o P: T — “I complete all homework assignments” is true.
o P: F — “I complete all homework assignments” is false.
o Q: T — “I study for at least 10 hours” is true.
o Q: F — “I study for at least 10 hours” is false.
o R: T — “I will pass the exam” is true.
o P ∨ Q: T — “I complete all homework assignments (false) or I study for at least 10 hours (true)” is true.
o P ∨ Q: T — “I complete all homework assignments (true) or I study for at least 10 hours (false)” is true.
o (P ∨ Q) ⇔ R: T — “I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours” is true.
Therefore, the sentence “I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours” is logically valid. Applications TTs have many applications in mathematics and science. A recently proposed application by Benamira et al. (2023b) suggests using them within Convolutional Neural Networks (CNNs) to create a novel CNN architecture called Truth Table net (TT-net). 
In traditional CNNs, researchers do not have clear insight into how the network makes decisions, making CNNs “black boxes.” The TT-net architecture makes it easier for researchers to understand and interpret how the CNN makes decisions. After training, a TT-net can be analyzed and understood using Boolean decision trees, Disjunctive/Conjunctive Normal Forms (DNF/CNF), or Boolean logic circuits. This allows researchers to map the decision-making process of the CNN. A similar proposed application of TTs by Benamira et al. (2023a) suggests using them as a framework called Truth Table rules (TT-rules) within Deep Neural Networks (DNNs). TT-rules is based on the TT-net architecture, with the goal of making DNNs less of a “black box” and more interpretable by transforming trained DNN models into understandable rule-based systems using TTs. Summary Truth Tables are a useful tool to prove the logical validity of sentences using Boolean values. They are world models that explore and evaluate all the possible truth values of atomic and complex sentences. In other words, they help evaluate logical propositions by breaking down complex sentences into atomic sentences and analyzing all possible combinations of truth values within the proposition, as shown in examples 1 and 2. They have many applications in mathematics and science; proposed applications in CNNs and DNNs would use TTs to make the models less of a “black box” by making their decision-making processes more transparent and interpretable. References: Benamira, A., Guérand, T., Peyrin, T., & Soegeng, H. (2023a, September 18). Neural network-based rule models with truth tables. arXiv. http://arxiv.org/abs/2309.09638 Benamira, A., Guérand, T., Peyrin, T., Yap, T., & Hooi, B. (2023b, February 2). A scalable, interpretable, verifiable & differentiable logic gate convolutional neural network architecture from truth tables. arXiv. http://arxiv.org/abs/2208.08609 James, J. (2014). 
Math 310: Logic and truth tables [PDF]. Minnesota State University Moorhead, Mathematics Department. https://web.mnstate.edu/jamesju/Spr2014/Content/M310IntroLogic.pdf Lavin, A. (n.d.). 7.2: Propositions and their connectors. Thinking well - A logic and critical thinking textbook 4e (Lavin). LibreTexts Humanities. https://human.libretexts.org/Bookshelves/Philosophy/Thinking_Well_-_A_Logic_And_Critical_Thinking_Textbook_4e_(Lavin)/07%3A_Propositional_Logic/7.02%3A_Propositions_and_their_Connectors Lippman, D. (2022). 2.2: Introduction to truth tables. Mat 1130 mathematical ideas. Pierce College via The OpenTextBookStore. https://math.libretexts.org/Courses/Prince_Georges_Community_College/MAT_1130_Mathematical_Ideas_Mirtova_Jones_(PGCC:_Fall_2022)/02:_Logic/2.02:_Introduction_to_Truth_Tables Oxford Dictionary (2006). The Oxford dictionary of phrase and fable (2nd ed.). Oxford University Press. DOI: 10.1093/acref/9780198609810.001.0001 Russell, S. & Norvig, P. (2021). 7. Logical agents. Artificial intelligence: A modern approach. 4th edition. Pearson Education, Inc. ISBN: 9780134610993; eISBN: 9780134671932. Sheldon, R. (2022, December). What is a truth table? TechTarget. https://www.techtarget.com/whatis/definition/truth-table

  • Minimizing Variable Scope in Java: Best Practices for Secure and Efficient Code

    This article explains the importance of minimizing variable scope in Java to enhance code readability, maintainability, and security. It highlights Java's object-oriented approach, contrasts it with languages like C++, and provides examples of best practices, including encapsulation and controlled access through methods. Alexander S. Ricciardi November 20, 2024 In Java, the scope of a variable is the part of a program where the variable can be accessed (Mahrsee, 2024). The scope can be class scope, method scope, or block scope. Java does not have global variables like C++ does; global variables are variables that can be accessed from anywhere in the program. In other words, such variables have a global scope. Java inherently minimizes scope by encapsulating everything in classes. Java is a strictly object-oriented programming (OOP) language rather than a procedural one like C. On the other hand, C++ supports both paradigms, OOP and procedural programming. Scope minimization is an approach with the goal of improved readability, better maintainability, and a reduced chance of errors (Carter, 2021). DCL53-J of the SEI CERT Oracle Coding Standard for Java (CMU, n.d.) recommends minimizing the scope of variables because it helps: “avoid common programming errors, improves code readability by connecting the declaration and actual use of a variable, and improves maintainability because unused variables are more easily detected and removed. It may also allow objects to be recovered by the garbage collector more quickly, and it prevents violations of DCL51-J. Do not shadow or obscure identifiers in subscopes.” Minimizing the scope of a variable also adds a layer of security, as the variable is restricted to the context where it is needed. This reduces access, manipulation, or misuse by other parts of the program, limiting possible vulnerabilities. 
For example, in Java, declaring a class variable ‘private’ restricts its scope to the class, preventing other classes from directly modifying or accessing it. If the variable needs to be accessed or modified, it can only be done through controlled methods, such as getters or setters, which encapsulate the variable or return a copy of it; additionally, they can implement an extra layer of validation or logic that ensures the variable is properly used. Below is an example of what applying scope minimization in Java can look like:

public class Employee {
    // Private class variables to restrict access
    private String name;
    private double salary;

    // Constructor
    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    // Getter for name (read-only access)
    public String getName() {
        return name;
    }

    // Getter and setter for salary with validation
    public double getSalary() {
        return salary;
    }

    public void setSalary(double salary) {
        if (salary > 0) {
            this.salary = salary;
        } else {
            throw new IllegalArgumentException("Salary must be greater than 0.");
        }
    }

    // Method to provide an increment with controlled logic
    public void applyBonus(double percentage) {
        if (percentage > 0 && percentage <= 20) {
            this.salary += this.salary * (percentage / 100);
        } else {
            throw new IllegalArgumentException("Bonus percentage must be between 0 and 20.");
        }
    }

    // Display employee details
    public void printDetails() {
        System.out.println("Name: " + name);
        System.out.println("Salary: $" + salary);
    }
}

public class Main {
    public static void main(String[] args) {
        // Create an Employee object
        Employee emp = new Employee("Alice", 50000);
        System.out.println("Initial Salary:");
        emp.printDetails();

        // Modify salary
        emp.setSalary(55000);
        emp.applyBonus(10);
        System.out.println("\nUpdated Salary:");
        emp.printDetails();

        // Attempting to set an invalid salary
        System.out.println("\nInvalid salary (-10000):");
        try {
            emp.setSalary(-10000);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}

Outputs:

Initial Salary:
Name: Alice
Salary: $50000.0

Updated Salary:
Name: Alice
Salary: $60500.0

Invalid salary (-10000):
Salary must be greater than 0.

To summarize, minimizing variable scope in Java improves code readability, maintainability, and security by restricting access to variables to where they are needed most. Java is a strictly object-oriented programming (OOP) language, meaning that it encapsulates data and variables within classes. This approach not only prevents unintended interactions and vulnerabilities but also aligns with best practices for efficient and secure programming.

References:

Carter, K. (2021, February 10). Effective Java: Minimize the scope of local variables. DEV Community. https://dev.to/kylec32/effective-java-minimize-the-scope-of-local-variables-3e87

CMU — Software Engineering Institute (n.d.). DCL53-J. Minimize the scope of variables. SEI CERT Oracle coding standard for Java. Carnegie Mellon University, Software Engineering Institute.

Mahrsee, R. (2024, May 13). Scope of variables in Java. GeeksforGeeks. https://www.geeksforgeeks.org/variable-scope-in-java/

bottom of page