Tag: math

  • Proof Techniques in Predicate Logic

    Proof Techniques in Predicate Logic

    Having established both syntax and semantics, we now delve into the essential methods used to formally prove statements in predicate logic. Specifically, we’ll explore Natural Deduction and Hilbert-style systems, illustrating their application to real mathematical reasoning.

    Natural Deduction in Predicate Logic

    Natural deduction is a formal proof system that extends naturally from propositional logic, incorporating quantifiers. The fundamental idea is to use inference rules to construct proofs directly, step by step.

    Extension of Propositional Logic Rules

    Predicate logic inherits all propositional inference rules (such as conjunction, implication, negation) and adds rules for quantifiers.

    Introduction and Elimination Rules for Quantifiers

    Two new pairs of rules are introduced to handle quantifiers:

    • Universal Quantifier (\(\forall\)):
      • Introduction (\(\forall I\)): If you can derive \(\varphi(x)\) for an arbitrary element \(x\), you may conclude \(\forall x \varphi(x)\).
      • Elimination (\(\forall E\)): From \(\forall x \varphi(x)\), you may infer \(\varphi(a)\) for any specific element \(a\).
    • Existential Quantifier (\(\exists\)):
      • Introduction (\(\exists I\)): From \(\varphi(a)\) for a particular element \(a\), conclude \(\exists x \varphi(x)\).
      • Elimination (\(\exists E\)): If \(\exists x \varphi(x)\) holds, and from the assumption \(\varphi(a)\) (for an arbitrary \(a\)), you derive a conclusion \(\psi\) independent of \(a\), then you can conclude \(\psi\).

    Examples of Formal Proofs (Natural Deduction)

    Example: Prove the statement “If everyone loves pizza, then someone loves pizza.”

    Formally: \(\forall x \text{LovesPizza}(x) \rightarrow \exists y \text{LovesPizza}(y)\)

    Proof Sketch:

    1. Assume \(\forall x \text{LovesPizza}(x)\).
    2. From this assumption, choose any specific individual, say \(a\), then we have \(\text{LovesPizza}(a)\) by \(\forall E\).
    3. Now apply existential introduction (\(\exists I\)): from \(\text{LovesPizza}(a)\), conclude \(\exists y \text{LovesPizza}(y)\).
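
    Written out as a line-by-line derivation (one common natural deduction layout among several), the proof looks like this:

    \[
    \begin{array}{lll}
    1 & \forall x\, \text{LovesPizza}(x) & \text{assumption} \\
    2 & \text{LovesPizza}(a) & \forall E \text{ on } 1 \\
    3 & \exists y\, \text{LovesPizza}(y) & \exists I \text{ on } 2 \\
    4 & \forall x\, \text{LovesPizza}(x) \rightarrow \exists y\, \text{LovesPizza}(y) & \rightarrow I, \text{ discharging } 1
    \end{array}
    \]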

    This example illustrates the intuitive connection between universal and existential quantification.

    Hilbert-Style Proof Systems

    Hilbert-style proof systems build from axioms and inference rules, similar to propositional logic but augmented for quantification.

    Typical Axioms involving Quantifiers

    Hilbert-style systems typically add axioms to propositional logic to handle quantifiers, such as:

    • Universal Instantiation: \(\forall x \varphi(x) \rightarrow \varphi(a)\), where \(a\) is any term substitutable for \(x\) (i.e., no variable of \(a\) becomes bound by the substitution).
    • Existential Generalization: \(\varphi(a) \rightarrow \exists x \varphi(x)\).

    Together with inference rules such as modus ponens (and a generalization rule), these axioms enable purely formal manipulation of quantified statements.

    Examples and Subtleties Introduced by Quantification

    Quantifiers add subtleties not present in propositional logic:

    • Order of quantifiers matters: \(\forall x \exists y P(x,y)\) says something quite different from \(\exists y \forall x P(x,y)\).
    • Careful treatment of variables (bound vs. free) is crucial to avoid ambiguities.

    Real Mathematical Example

    Statement: “For every integer \(x\), there exists an integer \(y\) such that \(y = x + 1\).”

    Formally: \(\forall x \in \mathbb{Z}, \exists y (y = x + 1)\)

    Proof (Natural Deduction Sketch):

    • Consider an arbitrary integer \(x\).
    • Let \(y = x + 1\). Clearly, \(y\) is an integer.
    • Thus, \(\exists y(y = x + 1)\).
    • Since \(x\) was arbitrary, we conclude \(\forall x \exists y(y = x + 1)\).

    Complexities and Subtleties Introduced by Quantification

    Quantification introduces significant complexity, especially regarding variable scope and substitutions. Consider the formula: \(\forall x \exists y (x < y)\)

    It is intuitively clear that “for every integer, there is always a larger one.” Yet, rearranging quantifiers drastically changes the meaning: \(\exists y \forall x (y > x)\)

    Now the statement claims that a single integer \(y\) is greater than every integer (including \(y\) itself), which is false in standard integer arithmetic.
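
    Because the domain is what quantifiers range over, the difference between the two orderings can be checked mechanically on a small finite structure. Below is a minimal Python sketch; the domain and the predicate are illustrative choices, not anything canonical:

    domain = range(5)
    P = lambda x, y: (x + y) % 5 == 0  # P(x, y): "x + y is divisible by 5"

    # ∀x ∃y P(x, y): for each x there is a matching y — true here
    print(all(any(P(x, y) for y in domain) for x in domain))  # True

    # ∃y ∀x P(x, y): one single y must work for every x — false here
    print(any(all(P(x, y) for x in domain) for y in domain))  # False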

    Exercises

    1. Using natural deduction, prove formally:

    \(\forall x (Even(x) \rightarrow \exists y (y = x + 2 \wedge Even(y)))\)

    2. Demonstrate why the following statement is not logically valid:

    \(\exists x \forall y (x > y)\)

    Provide a counterexample to illustrate your reasoning clearly.

    3. Translate and formally prove the following informal statement using natural deduction:
    • “If some integer is even, then not all integers are odd.”

    Summary

    Proof techniques in predicate logic enable precise mathematical reasoning through formalized methods. While natural deduction closely mirrors intuitive mathematical reasoning, Hilbert-style systems rely on powerful axioms and simple inference rules to build proofs systematically. Understanding quantification’s subtleties is critical for rigorous mathematical reasoning.

    Mastering these proof methods provides the essential skillset required to handle advanced mathematical and logical concepts effectively.

  • Semantics of Predicate Logic

    Semantics of Predicate Logic

    In the previous posts, we’ve discussed the syntax of predicate logic, outlining how terms, predicates, and formulas are formed. Now, we’ll explore semantics, explaining how meaning is formally assigned to these formulas.

    Structures and Interpretations

    The meaning of formulas in predicate logic is given by structures (or interpretations). Intuitively, a structure assigns concrete meanings to symbols, transforming abstract formulas into meaningful statements about specific objects.

    Formal Definition of a Structure

    Formally, a structure \(\mathcal{M}\) consists of:

    • A non-empty set \(D\), the domain of discourse.
    • An interpretation for each constant symbol, associating it with an element of \(D\).
    • An interpretation for each function symbol \(f\) of arity \(n\), assigning a function: \(f^{\mathcal{M}}: D^n \rightarrow D\)
    • An interpretation for each predicate symbol \(P\) of arity \(n\), assigning a subset of \(D^n\) indicating exactly when \(P\) holds true.

    Formally, we represent a structure as: \(\mathcal{M} = (D, I)\)

    where \(I\) gives the interpretations of all symbols.

    Examples of Mathematical Structures

    Example 1: Structure of integers with standard arithmetic

    • Domain \(D = \mathbb{Z}\).
    • Constants: integers like \(0, 1\).
    • Functions: Arithmetic operations (addition \(+\), multiplication \(\times\)).
    • Predicate “<” interpreted as: \(\{(x,y) \in \mathbb{Z}^2 \mid x < y \text{ as integers}\}\). Intuitively, the set defined by “<” includes all pairs of integers where the first integer is strictly less than the second.

    Truth in a Model

    Given a structure \(\mathcal{M}\), truth for formulas is defined recursively:

    Atomic Formulas

    An atomic formula \(P(t_1, \dots, t_n)\) is true in \(\mathcal{M}\) precisely when the tuple \((t_1^{\mathcal{M}}, \dots, t_n^{\mathcal{M}})\) lies in the subset defined by \(P\).

    This aligns intuitively with our everyday notion of truth: “Even(2)” is true because 2 indeed satisfies our usual definition of evenness.

    Extending Truth Definitions to Quantified Statements

    For quantified formulas, truth is defined as follows:

    • Universal Quantification (\(\forall x \varphi\)): True if, when we substitute any element from the domain for \(x\), the formula \(\varphi\) remains true.
      • Formally, \(\mathcal{M} \models \forall x \varphi\) means for every \(a \in D\), the formula \(\varphi[a/x]\) (formula \(\varphi\) with \(x\) replaced by \(a\)) is true in \(\mathcal{M}\).
    • Existential quantification (\(\exists x \varphi\)) is true if there exists at least one element in the domain for which \(\varphi\) becomes true when \(x\) is interpreted as this element.
      • Formally, \(\mathcal{M} \models \exists x \varphi\) means there is at least one \(a \in D\) such that \(\varphi[a/x]\) is true in \(\mathcal{M}\).

    In these definitions, \(\mathcal{M} \models \varphi\) means “formula \(\varphi\) is true in the structure \(\mathcal{M}\).”
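
    For finite structures, these recursive truth clauses can be evaluated directly by exhaustive search. Here is a minimal Python sketch; the domain and the Divides predicate are illustrative assumptions:

    # Structure M: domain {1, ..., 6}, Divides(x, y) interpreted as "x divides y"
    D = range(1, 7)
    Divides = lambda x, y: y % x == 0

    # M ⊨ ∀x ∃y Divides(x, y): true, since every x divides itself
    print(all(any(Divides(x, y) for y in D) for x in D))  # True

    # M ⊨ ∃y ∀x Divides(x, y): false, since no element of D is divisible by all of 1..6
    print(any(all(Divides(x, y) for x in D) for y in D))  # False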

    Models and Logical Validity

    • Model: A structure \(\mathcal{M}\) is a model for formula \(\varphi\) (written \(\mathcal{M} \models \varphi\)) if \(\varphi\) is true in \(\mathcal{M}\).
    • Satisfiability: A formula is satisfiable if it has at least one model.
    • Logical Validity: A formula is logically valid if it is true in every possible structure. We write this as: \(\models \varphi\)

    Intuitively, a logically valid formula represents a statement that holds universally, regardless of how its symbols are interpreted.

    Exercises

    1. Describe explicitly a structure that makes the formula \(\exists x (x^2 = 4) \) true.
    2. Consider the structure \(\mathcal{M}\) with domain \(\mathbb{N}\) and predicate \(Divides(x,y)\) meaning “\(x\) divides \(y\)”. Determine the truth value of:
      \(\forall x \exists y \; Divides(x,y)\)
    3. Give an example of a logically valid formula and one that is satisfiable but not logically valid. Clearly explain why each has the given property.

    This foundational understanding of semantics will enable us to move forward into exploring proof techniques and deeper logical analysis in predicate logic.

  • Error Propagation and Conditioning

    Error Propagation and Conditioning

    In numerical computations, errors can propagate through calculations, potentially leading to significant inaccuracies in results. Understanding how errors propagate and how the conditioning of a problem affects numerical stability is crucial for designing robust numerical algorithms. In this post, I will discuss error propagation and the concept of conditioning in numerical problems.

    Error Propagation

    Errors in numerical computations arise from round-off errors, truncation errors, and uncertainties in input data. These errors can propagate through subsequent calculations, amplifying or dampening their effects depending on the nature of the problem and the algorithm used.

    Types of Error Propagation

    • Additive Propagation: When independent errors accumulate linearly through computations. For example, in summing a sequence of numbers, each with a small error, the total error grows proportionally.
    • Multiplicative Propagation: When errors are scaled through multiplication, leading to potentially exponential growth in error magnitude.
    • Differential Propagation: When small input errors lead to large output variations, particularly in functions with steep gradients.

    Example of Error Propagation

    Consider approximating a derivative using finite differences: \(f'(x) \approx \frac{f(x+h) - f(x)}{h}\)

    If \(f(x)\) is obtained from measurements with small uncertainty, then errors in \(f(x+h)\) and \(f(x)\) propagate through the division by \(h\), potentially amplifying inaccuracies when \(h\) is very small.
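
    The effect is easy to reproduce numerically. In the sketch below, each evaluation of \(f\) carries simulated measurement noise of size about \(10^{-8}\); dividing by a small \(h\) amplifies that noise until it dominates the result (the noise level and step sizes are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    x, exact = 1.0, np.cos(1.0)                    # true derivative of sin at x = 1
    f = lambda t: np.sin(t) + rng.normal(0, 1e-8)  # each call adds small "measurement" noise

    for h in [1e-1, 1e-4, 1e-8]:
        approx = (f(x + h) - f(x)) / h
        print(f"h = {h:.0e}, error = {abs(approx - exact):.2e}")
    # The error first shrinks with h (less truncation), then grows again as noise/h takes over.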

    Conditioning of a Problem

    The conditioning of a numerical problem refers to how sensitive its solution is to small changes in the input. A well-conditioned problem has solutions that change only slightly with small perturbations in the input, whereas an ill-conditioned problem exhibits large variations in output due to small input changes.

    Measuring Conditioning: The Condition Number

    For a function \(f(x)\), the condition number \(\kappa\) measures how small relative errors in the input propagate to relative errors in the output: \(\kappa = \left| \frac{x}{f(x)} \frac{df(x)}{dx} \right|\)

    For matrix problems, the condition number is defined in terms of the matrix norm: \(\kappa(A) = \| A \| \cdot \| A^{-1} \|\)

    A high condition number indicates an ill-conditioned problem where small errors in input can lead to large deviations in the solution.

    Example of an Ill-Conditioned Problem

    Solving a nearly singular system of linear equations: \(Ax = b\)

    If the matrix \(A\) has a very large condition number, small perturbations in \(b\) or rounding errors can lead to vastly different solutions \(x\), making numerical methods unreliable.
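
    A concrete way to see this is with Hilbert matrices, a standard family of ill-conditioned matrices. The sketch below (plain NumPy) perturbs \(b\) by about \(10^{-10}\) and observes a far larger change in the solution:

    import numpy as np

    n = 10
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
    print(f"condition number: {np.linalg.cond(A):.1e}")  # roughly 1e13

    b = A @ np.ones(n)
    db = 1e-10 * np.random.default_rng(0).normal(size=n)  # tiny perturbation of b
    x1, x2 = np.linalg.solve(A, b), np.linalg.solve(A, b + db)
    print(np.max(np.abs(x1 - x2)))  # many orders of magnitude larger than 1e-10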

    Visualizing Conditioning with a Function

    To illustrate the concept of conditioning, consider the function \(f(x) = e^x\). The sensitivity of this function to input changes depends on the value of \(x\):

    • Well-Conditioned at \(x = 0\): Small changes in \(x\) (e.g., \(x = -0.1\) to \(x = 0.1\)) lead to small changes in \(f(x)\), indicating a well-conditioned problem.
    • Ill-Conditioned at \(x = 10\): Small changes in \(x\) (e.g., \(x = 9.9\) to \(x=10.1\)) cause large changes in \(f(x)\), illustrating an ill-conditioned problem.

    The following figure visually demonstrates this concept, where small perturbations in the input lead to significantly different outputs in ill-conditioned cases while remaining controlled in well-conditioned ones.

    Stability and Conditioning

    While conditioning is an inherent property of a problem, stability refers to how well an algorithm handles errors. A stable algorithm minimizes error amplification even when solving an ill-conditioned problem.

    Example of Algorithm Stability

    For solving linear systems \(Ax = b\), Gaussian elimination with partial pivoting is more stable than straightforward Gaussian elimination, as it reduces the impact of round-off errors on ill-conditioned systems.

    Conclusion

    Understanding error propagation and problem conditioning is essential for reliable numerical computations. While some problems are inherently sensitive to input errors, choosing numerically stable algorithms helps mitigate these effects. In the next post, I will discuss techniques for controlling numerical errors and improving computational accuracy.

  • Syntax of Predicate Logic

    Syntax of Predicate Logic

    In the previous post, we saw why propositional logic is not sufficient to express general mathematical statements, and why predicate logic is required.
    In this post, we’ll explore the formal syntax of predicate logic, detailing how statements are constructed using terms, predicates, quantifiers, and formulas.

    Terms

    In predicate logic, terms represent objects within a given domain. Terms can be:

    • Variables (usually denoted by lowercase letters such as \(x\), \(y\), \(z\)).
    • Constants (symbolizing specific, fixed objects, e.g., \(0\), \(1\), \(a\), \(b\)).
    • Function symbols applied to terms (e.g., \(f(x)\),\(g(x,y)\)).

    How Terms Are Built

    Formally, terms are constructed recursively:

    1. Any variable is a term.
    2. Any constant symbol is a term.
    3. If \(f\) is an \(n\)-ary function symbol and \(t_1, t_2, \dots, t_n\) are terms, then \(f(t_1, t_2, \dots, t_n)\) is also a term.

    Examples of Terms:

    • Variables: \(x\),\(y\)
    • Constants: \(0\),\(1\)
    • Function terms: \(f(x)\), \(+(x,y)\), \(\sin(z)\)
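
    This recursive definition maps directly onto a recursive data type. Below is a minimal Python sketch of terms as an abstract syntax tree; the class and function names are my own, purely illustrative:

    from dataclasses import dataclass

    @dataclass
    class Var:              # rule 1: any variable is a term
        name: str

    @dataclass
    class Const:            # rule 2: any constant symbol is a term
        name: str

    @dataclass
    class Func:             # rule 3: a function symbol applied to terms is a term
        name: str
        args: list

    def variables(t):
        """Collect the variables occurring in a term."""
        if isinstance(t, Var):
            return {t.name}
        if isinstance(t, Const):
            return set()
        return set().union(*(variables(a) for a in t.args))

    term = Func("f", [Var("x"), Func("g", [Var("y")])])  # the term f(x, g(y))
    print(variables(term))  # {'x', 'y'}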

    Predicates

    Predicates express properties of terms or relations between them. A predicate \(P\) applied to terms \(t_1, t_2, \dots, t_n\) is written as \(P(t_1, t_2, \dots, t_n)\).

    Examples:

    • \(Even(x)\) — “\(x\) is even.”
    • \(Prime(y)\) — “\(y\) is prime.”
    • \(Greater(x,y)\) — “\(x\) is greater than \(y\).”

    Predicates are crucial in clearly describing properties and relationships in logic.

    Quantifiers

    Quantifiers allow us to express statements involving generality or existence:

    • Universal quantifier (\(\forall\)): means “for all” or “every.”
      • Example: \(\forall x, Even(x) \vee Odd(x)\) — “Every number is either even or odd.”
    • Existential quantifier (\(\exists\)): means “there exists.”
      • Example: \(\exists y, Prime(y) \wedge y > 2\) — “There exists some \(y\) that is prime and greater than 2.”

    Quantifiers bind variables within their scope, turning predicates into full propositions.

    Forming Formulas

    Predicate logic uses the following rules to form formulas:

    1. Atomic Formulas: If \(P\) is an \(n\)-ary predicate and \(t_1, t_2, \dots, t_n\) are terms, \(P(t_1, t_2, \dots, t_n)\) is an atomic formula.
    2. Complex Formulas:
      • If \(\varphi\) and \(\psi\) are formulas, then so are \((\varphi \wedge \psi)\), \((\varphi \vee \psi)\), \((\varphi \rightarrow \psi)\), \((\varphi \leftrightarrow \psi)\), \(\neg \varphi\).
      • If \(\varphi\) is a formula and \(x\) a variable, then \(\forall x \varphi\) and \(\exists x \varphi\) are also formulas.

    Well-Formed Formulas (WFFs)

    Formulas constructed following the above rules are called well-formed formulas. They represent meaningful mathematical statements.

    Examples:

    • \(\forall x (Prime(x) \rightarrow x > 1)\)
    • \(\exists y (y^2 = 2)\)
    • \(\forall x \exists y (y > x)\)

    Exercises

    1. Identify the terms, predicates, and quantifiers in the following formula: \(\forall x (Even(x) \rightarrow \exists y (y = x + 2 \wedge Even(y)))\)
    2. Construct a predicate logic formula stating: “Every positive integer has a successor.”
    3. Are the following expressions terms or formulas? Explain why.
      • \(f(x,g(y))\)
      • \(P(x,y,z)\)
      • \(\forall x, g(x)\)
    4. Write a predicate logic formula to express: “There exists a number that is less than all other numbers.”

    This post lays out the precise structure needed to form meaningful mathematical statements, setting the stage for exploring semantics and proofs in predicate logic.

  • Introduction to Predicate Logic

    Introduction to Predicate Logic

    Why Predicate Logic?

    In our journey through formal mathematics, we’ve explored propositional logic—a powerful yet limited tool. Propositional logic allows us to reason about the truth of statements built from simpler ones using logical connectives. However, it falls short when we need to express statements about generalizations, properties, or specific relationships involving objects.

    Predicate logic (also known as first-order logic) extends propositional logic by enabling precise expressions involving objects, their properties, and relationships between them. It’s an essential tool in mathematics, computer science, and logic because it captures a richer class of statements we frequently encounter.

    Motivation and Limitations of Propositional Logic Revisited

    Consider the statement:

    “All prime numbers greater than 2 are odd.”

    Using propositional logic alone, this statement can’t be properly captured. While propositional logic handles statements like “A number is prime,” it doesn’t support quantifying over numbers or expressing relationships like “greater than.”

    This is where predicate logic comes into play, providing the necessary expressive power.

    The Need for Expressing Generalizations, Properties, and Relationships

    In mathematics and formal reasoning, we regularly encounter statements such as:

    • “For every integer \(x\), \(x^2 \ge 0\).”
    • “There exists a prime number greater than 1000.”
    • “Every continuous function on a closed interval is bounded.”

    Predicate logic gives us the tools—quantifiers, predicates, and terms—to rigorously state and analyze these types of claims.

    What is Predicate Logic (First-Order Logic)?

    Predicate logic enhances propositional logic by introducing:

    • Terms: Variables, constants, and functions denoting mathematical objects.
    • Predicates: Properties or relationships about objects.
    • Quantifiers: Statements of generality or existence (“for all,” “there exists”).

    These new elements significantly expand the expressive capability of logic, making it possible to formalize complex mathematical reasoning fully.

    Historical Context and Significance

    Predicate logic was formalized in the late 19th and early 20th centuries by logicians like Gottlob Frege, Bertrand Russell, and Alfred Tarski. It quickly became foundational, underpinning modern mathematics through the axiomatization of set theory.

    In computer science, predicate logic provides the theoretical backbone for automated reasoning, databases, logic programming (like Prolog), and formal verification of software and hardware.

    Comparison with Propositional Logic

    Propositional logic deals with statements that can only be true or false without internal structure. Predicate logic, on the other hand, deals explicitly with the internal structure of statements, using quantification and detailed internal references to objects and their properties.

    For example, propositional logic treats the statement “x is prime” as a simple true or false proposition. Predicate logic allows us to clarify precisely what “x” is and to reason explicitly about varying values of “x.”

    Example Illustrating the Difference:

    • Propositional logic:
      • “P: 2 is prime.”
      • “Q: 3 is odd.”
    • Predicate logic:
      • “\(\forall x (\text{Prime}(x) \land x > 2 \rightarrow \text{Odd}(x))\)”

    Here, predicate logic explicitly quantifies over numbers and specifies conditions clearly, something impossible in propositional logic.

    Exercises

    1. Provide examples of three mathematical statements that can be expressed clearly in predicate logic but not in propositional logic.
    2. Can predicate logic express the following statement? Explain why or why not:
      • “There are infinitely many prime numbers.”
    3. Rewrite the following informal statement in formal predicate logic:
      • “Every integer divisible by 4 is even.”

    This foundational introduction sets the stage for exploring the syntax, semantics, and proof methods of predicate logic in greater detail.

  • Essential Set-Theoretic Foundations

    Essential Set-Theoretic Foundations

    Before we delve deeper into predicate logic, it’s important to clarify a few essential concepts from set theory. Predicate logic itself relies on some basic set-theoretic notions for its formal definitions and interpretations. This short introduction provides the minimal set theory you’ll need.

    Introduction to Sets

    A set is a collection of distinct objects, called elements, considered as a single entity.

    • Examples:
      • Set of integers \(\mathbb{Z} = \{\dots, -2, -1, 0, 1, 2, \dots\}\)
      • Set of real numbers \(\mathbb{R}\)

    Membership and Subsets

    • Membership: If an object \(a\) belongs to a set \(A\), we write \(a \in A\).
      • Example: \(3 \in \mathbb{Z}\), \(\pi \in \mathbb{R}\).
    • Subsets: A set \(A\) is a subset of another set \(B\) (written \(A \subseteq B\)) if every element of \(A\) is also in \(B\).
      • Example: The set of integers \(\mathbb{Z}\) is a subset of real numbers \(\mathbb{R}\), written as \(\mathbb{Z} \subseteq \mathbb{R}\).

    Cartesian Product

    The Cartesian product \(A \times B\) of sets \(A\) and \(B\) is the set of all ordered pairs where the first element is from \(A\) and the second from \(B\): \(A \times B = \{(a,b) \mid a \in A, b \in B\}\)

    • Example: If \(A = \{1,2\}\) and \(B = \{x,y\}\), then: \(A \times B = \{(1,x), (1,y), (2,x), (2,y)\}\)

    Relations and Functions

    • A relation between sets \(A\) and \(B\) is a subset of their Cartesian product \(A \times B\).
      • Example: “Less than” relation on integers, represented as: \(\{(x,y) \mid x,y \in \mathbb{Z}, x<y\}\)
    • A function from a set \(A\) to set \(B\) assigns exactly one element of \(B\) to each element of \(A\).
      • Formally: \(f: A \rightarrow B\).
      • Example: The square function on integers \(f(x)=x^2\) takes an integer \(x\) and maps it to its square in \(\mathbb{Z}\).

    Relations as Subsets

    In predicate logic, predicates are interpreted as subsets of Cartesian products. For instance, the predicate “\(<\)” (less than) on integers is the subset of all integer pairs \((x,y)\) where \(x<y\).
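
    This set-theoretic view translates directly into code: a predicate over a finite set is literally a subset of a Cartesian product. A minimal Python sketch:

    from itertools import product

    A = {1, 2, 3}
    less_than = {(x, y) for (x, y) in product(A, A) if x < y}  # "<" as a subset of A × A
    print(sorted(less_than))    # [(1, 2), (1, 3), (2, 3)]

    # Evaluating the predicate is just membership in the subset
    print((1, 3) in less_than)  # True
    print((3, 1) in less_than)  # False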


    Exercises

    1. Define the set \(A \times A\) explicitly, given \(A = \{0,1\}\).
    2. Let \(A = \{1,2,3\}\). Write explicitly the subset defined by the predicate “greater than.”
    3. Given sets \(A=\{a,b\}\), \(B=\{1,2\}\), and \(C=\{x\}\), determine \(A\times B\times C\).

    These basic set-theoretic concepts are foundational to clearly understanding the semantics of predicate logic, enabling us to rigorously discuss structures and interpretations in logic.

  • Sources of Numerical Errors (Round-Off, Truncation, Stability)

    Sources of Numerical Errors (Round-Off, Truncation, Stability)

    Numerical computations inherently involve errors due to the limitations of representing and manipulating numbers in a finite-precision system. Understanding the different types of numerical errors is crucial for developing stable and accurate numerical algorithms. In this post, I will discuss three primary sources of numerical errors: round-off errors, truncation errors, and numerical stability.

    Round-Off Errors

    Round-off errors arise from the finite precision used to store numbers in a computer. Since most real numbers cannot be represented exactly in floating-point format, they must be approximated, leading to small discrepancies.

    Causes of Round-Off Errors

    • Finite Precision Representation: Floating-point numbers are stored using a limited number of bits, which results in small approximations for many numbers. For example, the decimal 0.1 cannot be exactly represented in binary, leading to a small error.
    • Arithmetic Operations: When performing arithmetic on floating-point numbers, small errors can accumulate.
    • Conversion Between Number Bases: Converting between decimal and binary introduces small inaccuracies due to repeating fractions in binary representation.

    Example of Round-Off Error

    Consider summing 0.1 ten times in floating-point arithmetic:

    print(sum([0.1] * 10) == 1.0)  # Expected True, but prints False due to accumulated round-off
    print(sum([0.1] * 10))  # 0.9999999999999999

    This error occurs because 0.1 is stored as an approximation, and the accumulation of small errors results in a slight deviation from 1.0.

    Catastrophic Cancellation

    Catastrophic cancellation is a specific type of round-off error that occurs when subtracting two nearly equal numbers. Since the leading significant digits cancel out, the result has significantly reduced precision, often leading to large relative errors.

    For example, consider:

    import numpy as np
    x = np.float32(1.0000001)
    y = np.float32(1.0000000)
    print(x - y)  # prints ~1.1920929e-07 rather than 1e-07: the leading digits cancelled

    If the subtraction results in a number much smaller than the original values, relative errors become large, reducing numerical accuracy. This type of error is especially problematic in iterative methods and when computing small differences in large values.

    Truncation Errors

    Truncation errors occur when an infinite or continuous mathematical process is approximated by a finite or discrete numerical method. These errors arise from simplifying mathematical expressions or using approximate numerical techniques.

    Causes of Truncation Errors

    • Numerical Differentiation and Integration: Approximating derivatives using finite differences or integrals using numerical quadrature leads to truncation errors.
    • Series Expansions: Many functions are approximated using truncated Taylor series expansions, leading to errors that depend on the number of terms used.
    • Discretization of Continuous Problems: Solving differential equations numerically involves discretizing time or space, introducing truncation errors.

    Example of Truncation Error

    Approximating the derivative of \(f(x) = \sin(x)\) using the finite difference formula: \(f'(x) \approx \frac{f(x+h) - f(x)}{h}\)

    For small h, this formula provides an approximation, but it is not exact because it ignores higher-order terms in the Taylor series expansion.
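
    The leading truncation error of this formula is roughly \(\frac{h}{2}|f''(x)|\), so halving \(h\) roughly halves the error, as the short check below illustrates (step sizes are illustrative):

    import numpy as np

    x, exact = 1.0, np.cos(1.0)  # exact derivative of sin at x = 1
    for h in [1e-1, 1e-2, 1e-3]:
        approx = (np.sin(x + h) - np.sin(x)) / h
        print(f"h = {h:.0e}, truncation error = {abs(approx - exact):.2e}")
    # The error decreases roughly linearly with h, as the h/2 * |f''(x)| term predicts.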

    Numerical Stability

    Numerical stability pertains to an algorithm’s sensitivity to small perturbations, such as round-off or truncation errors, during its execution. A numerically stable algorithm ensures that such small errors do not grow uncontrollably, thereby maintaining the accuracy of the computed results.

    Causes of Numerical Instability

    • Ill-Conditioned Problems: Some problems are highly sensitive to small changes in input. For example, solving a nearly singular system of linear equations can magnify errors, making the solution highly inaccurate.
    • Unstable Algorithms: Some numerical methods inherently amplify errors during execution. For instance, certain iterative solvers may diverge rather than converge if they are not properly designed.
    • Accumulation of Round-Off Errors in Repeated Computations: When an algorithm involves repeated arithmetic operations, small round-off errors may compound over time, leading to significant deviations from the true result.

    Example of Numerical Instability: Iterative Methods

    Consider an iterative method designed to approximate the square root of a number. If the method is not properly formulated, small errors in each iteration can accumulate, leading to divergence from the true value rather than convergence. This behavior exemplifies numerical instability, as the algorithm fails to control the propagation of errors through successive iterations.
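
    The square-root description above is schematic; a classic concrete instance of the same phenomenon is the forward recurrence for the integrals \(I_n = \int_0^1 x^n e^{x-1}\,dx\), which satisfy \(I_n = 1 - n I_{n-1}\). Each step multiplies the error inherited from the previous step by \(n\), so the tiny round-off in \(I_0\) grows roughly like \(n!\):

    import math

    I = 1 - 1 / math.e        # I_0 = 1 - 1/e, stored with ~1e-16 round-off
    for n in range(1, 21):
        I = 1 - n * I         # mathematically exact, numerically unstable
    print(I)  # far outside (0, 1), although every true I_n is a small positive number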

    Conclusion

    Understanding numerical errors is crucial for designing reliable computational methods. While round-off errors stem from finite precision representation, truncation errors arise from approximations in numerical methods. Stability plays a key role in determining whether small errors will remain controlled or grow exponentially. In the next post, I will discuss techniques to mitigate these errors and improve numerical accuracy.

  • Limitations of Propositional Logic

    Limitations of Propositional Logic

    In the previous posts, we’ve extensively discussed propositional logic, exploring its syntax, semantics, and proof techniques. Propositional logic is powerful and foundational; however, it has significant limitations in its expressiveness. Recognizing these limitations is essential to understanding why more advanced logical systems, such as predicate logic, are necessary.

    Expressiveness Limitations

    Propositional logic deals exclusively with entire statements (propositions) as indivisible units. It does not analyze the internal structure of these statements. Consequently, propositional logic cannot express statements that involve quantification, generalizations, or relationships between individual objects. It lacks the capability to handle statements that refer explicitly to particular individuals or properties that objects can possess.

    This lack of expressiveness restricts propositional logic to very simple assertions, leaving many important mathematical and philosophical statements beyond its reach. To overcome this, predicate logic introduces the concepts of variables, quantifiers (such as “for all” and “there exists”), predicates, and functions, allowing for richer and more precise expression of complex ideas.

    Examples of Statements Propositional Logic Cannot Express

    To illustrate these limitations clearly, consider the following examples that propositional logic cannot adequately capture:

    1. Generalizations:
      • “All humans are mortal.”
      • “Every even number greater than 2 is the sum of two primes.”
      Propositional logic cannot represent general statements involving “all” or “every,” since it cannot quantify over a set or category of objects.
    2. Existential Statements:
      • “There exists an integer solution to the equation \(x^2 - 2 = 0\).”
      • “Some cats are black.”
      Statements involving existence or nonexistence of certain elements are beyond the scope of propositional logic since it has no concept of individual objects or variables.
    3. Relational Statements:
      • “Alice is taller than Bob.”
      • “Paris is the capital of France.”
      These statements explicitly describe relationships between specific entities or individuals. Propositional logic treats such statements as atomic and provides no way to express the underlying structure or relationships explicitly.

    In propositional logic, each of these statements would have to be represented by a single, unanalyzable symbol, losing all internal structural information.

    Practical Implications

    The expressiveness limitations of propositional logic have practical consequences, particularly in areas such as mathematics, computer science, and artificial intelligence.

    • Complex Mathematical Reasoning: Propositional logic is insufficient for expressing and reasoning about even basic algebraic or geometric properties explicitly. For example, expressing and proving statements about arithmetic or geometric relationships requires the ability to quantify and reason about specific objects or numbers.
    • Logical Reasoning in Computer Science: In database queries, rule-based systems, and software verification, propositional logic quickly reaches its limits. Queries such as “List all employees who have a salary greater than their manager” or verifying software correctness with quantified conditions necessitate the richer structure provided by predicate logic.

    These practical scenarios underscore why moving beyond propositional logic is not just beneficial but essential for rigorous reasoning in more complex domains.

    Transition to Predicate Logic

    To address these limitations, we introduce predicate logic, also known as first-order logic. Predicate logic extends propositional logic by allowing:

    • Variables and Quantification: Variables represent individuals or objects, and quantifiers such as “for all” (\(\forall\)) and “there exists” (\(\exists\)) allow us to state general or existential claims explicitly.
    • Predicates and Relations: These represent properties of objects or relationships between objects, allowing for structured expressions such as “\(x\) is mortal” or “\(x\) is greater than \(y\).”
    • Functions: Functions permit explicit expression of operations on objects, enhancing the expressiveness even further.

    For instance, the statement “All humans are mortal” can be precisely expressed in predicate logic as:

    \[\forall x (H(x) \rightarrow M(x))\]

    meaning “for every object \(x\), if \(x\) is human (\(H(x)\)), then \(x\) is mortal (\(M(x)\)).”

    In the upcoming posts, we will dive deeply into predicate logic, exploring its syntax, semantics, proof methods, and applications. This advancement will enable us to capture more sophisticated mathematical and philosophical concepts and significantly expand our logical toolkit.

  • Proof Strategies and Advanced Techniques

    Proof Strategies and Advanced Techniques

    In previous posts of this thread, we introduced formal proof techniques in propositional logic, discussing natural deduction, Hilbert-style proofs, and the fundamental concepts of soundness and completeness. Now, we turn to advanced proof strategies that enhance our ability to construct and analyze proofs efficiently. In particular, we will explore proof by contradiction and resolution, two powerful techniques frequently used in mathematics, logic, and computer science.

    Proof by Contradiction

    Proof by contradiction (also known as reductio ad absurdum) is a fundamental method in mathematical reasoning. The core idea is to assume the negation of the statement we wish to prove and show that this leads to a contradiction. If the assumption results in an impossible situation, we conclude that our original statement must be true.

    Formalization in Propositional Logic

    Proof by contradiction can be expressed formally as:

    If \(\neg P \vdash (Q \land \neg Q)\), then \(\vdash P\).

    This means that if assuming \(\neg P\) leads to a contradiction (\(Q \land \neg Q\)), then \(\neg P\) must be false, so \(P\) holds. This formulation captures the essence of proof by contradiction: by demonstrating that an assumption results in a logical impossibility, we conclude that the assumption must have been incorrect. In propositional logic, suppose we wish to prove a formula \(P\).

    Proof by contradiction consists of the following steps:

    1. Assume \(\neg P\) (i.e., assume that \(P\) is false).
    2. Using inference rules, derive a contradiction—i.e., derive a formula of the form \(Q \land \neg Q\), where \(Q\) is some proposition.
    3. Since a contradiction is always false, the assumption \(\neg P\) must also be false.
    4. Therefore, \(P\) must be true.

    This follows from the principle of the excluded middle in classical logic, which states that for any proposition \(P\), either \(P\) or \(\neg P\) must be true.

    Example in Propositional Logic

    Let us prove that if \(P \rightarrow Q\) and \(\neg Q\) hold, then \(\neg P\) must also hold:

    1. Assume the negation of the desired conclusion: Suppose \(P\) is true.
    2. Use the given premises:
      • We know that \(P \rightarrow Q\) is true.
      • By Modus Ponens, since \(P\) is true, we must have \(Q\) as true.
      • However, we are also given that \(\neg Q\) is true, meaning that \(Q\) must be false.
    3. Contradiction: Since \(Q\) is both true and false, we reach a contradiction.
    4. Conclusion: Since our assumption \(P\) led to a contradiction, we conclude that \(\neg P\) must be true.

    This establishes the validity of Modus Tollens: if \(P \rightarrow Q\) is true and \(\neg Q\) is true, then \(\neg P\) is also true.

    Applied Example

    To illustrate how proof by contradiction works in an applied setting, consider proving that \(\sqrt{2}\) is irrational.

    We define the following propositions:

    • \(R\): “\(\sqrt{2}\) is irrational.”
    • \(E_p\): “\(p\) is even.”
    • \(E_q\): “\(q\) is even.”
    1. Assume the opposite: Suppose that \(R\) is false, meaning \(\sqrt{2}\) is rational and can be written as a fraction \(\frac{p}{q}\) in lowest terms, where \(p\) and \(q\) are integers with no common factors other than \(1\).
    2. Square both sides of \(\sqrt{2} = \frac{p}{q}\): \(2 = \frac{p^2}{q^2}\), which implies \(2q^2 = p^2\).
    3. Conclude that \(p^2\) is even: Since \(2q^2 = p^2\), \(p^2\) is divisible by \(2\), which means \(p\) must also be even. That is, \(E_p\) holds.
    4. Write \(p\) as \(p=2k\) for some integer \(k\), then substitute: \(2q^2 = (2k)^2 = 4k^2\), so \(q^2 = 2k^2\).
    5. Conclude that \(q^2\) is even, which implies that \(q\) is even, i.e., \(E_q\) holds.
    6. Contradiction: Both \(p\) and \(q\) are even, contradicting the assumption that \(\frac{p}{q}\) was in lowest terms. That is, we have derived \(E_p \land E_q\), which contradicts \(\neg (E_p \land E_q)\), a consequence of the lowest-terms assumption made under \(\neg R\).
    7. Conclusion: Since assuming \(\neg R\) led to a contradiction, we conclude that \(R\) must be true. Therefore, \(\sqrt{2}\) is irrational.

    Proof by contradiction is a widely used technique, particularly in theoretical mathematics, number theory, and logic.

    Resolution

    Resolution is a proof technique commonly used in automated theorem proving and logic programming. It is based on the idea of refutation: to prove that a statement is true, we assume its negation and derive a contradiction using a systematic process.

    Resolution operates within conjunctive normal form (CNF), where statements are expressed as a conjunction of disjunctions (i.e., sets of clauses). The resolution rule allows us to eliminate variables step by step to derive contradictions.

    The Resolution Rule:

    If we have two clauses:

    • \(P \lor A\)
    • \(\neg P \lor B\)

    We can resolve them to infer a new clause:

    • \(A \lor B\)

    By eliminating \(P\), we combine the remaining parts of the clauses.

    Example:

    Suppose we have the following premises:

    1. “Alice studies or Bob is happy.” \(S \lor H\)
    2. “Alice does not study or Bob goes to the gym.” \(\neg S \lor G\)
    3. “Bob does not go to the gym.” \(\neg G\)

    We wish to determine whether Bob is happy (i.e., prove \(H\)).

    Step 1: Apply Resolution

    • From (2) and (3), resolve on \(G\): \(\neg S \lor G\) and \(\neg G\) produce \(\neg S\).
    • From (1) and \(\neg S\), resolve on \(S\): \(S \lor H\) and \(\neg S\) produce \(H\).

    Thus, we have derived \(H\), proving that Bob is happy.
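
    Because the rule is so mechanical, resolution is straightforward to automate. Below is a minimal Python sketch of the derivation above, with clauses represented as frozensets of (symbol, polarity) literals; the representation is an illustrative choice:

    def resolve(c1, c2):
        """All clauses obtained by resolving c1 against c2 on one complementary literal."""
        return [
            (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            for (name, pol) in c1
            if (name, not pol) in c2
        ]

    clause1 = frozenset({("S", True), ("H", True)})    # S ∨ H
    clause2 = frozenset({("S", False), ("G", True)})   # ¬S ∨ G
    clause3 = frozenset({("G", False)})                # ¬G

    not_s = resolve(clause2, clause3)[0]  # resolve on G, yielding ¬S
    h = resolve(clause1, not_s)[0]        # resolve on S, yielding H
    print(not_s, h)  # frozenset({('S', False)}) frozenset({('H', True)})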

    Summary

    • Proof by contradiction is a classical method that assumes the negation of a statement and derives a contradiction, proving that the statement must be true.
    • Resolution is a formal proof technique used in logic and computer science, particularly in automated reasoning.

    Both methods are powerful tools in mathematical logic, each serving distinct purposes in different areas of theoretical and applied reasoning.

    Next Steps

    Now that we have covered fundamental and advanced proof techniques in propositional logic, in the next post of this thread I will talk about the Limitations of Propositional Logic.

  • Representation of Numbers in Computers

    Representation of Numbers in Computers

    Computers handle numbers differently from how we do in mathematics. While we are accustomed to exact numerical values, computers must represent numbers using a finite amount of memory. This limitation leads to approximations, which can introduce errors in numerical computations. In this post, I will explain how numbers are stored in computers, focusing on integer and floating-point representations.

    Integer Representation

    Integers are stored exactly in computers using binary representation. Each integer is stored in a fixed number of bits, commonly 8, 16, 32, or 64 bits. The two primary representations, described below, are unsigned integers and signed (two's complement) integers.

    The Binary System

    Computers operate using binary (base-2) numbers, meaning they represent all values using only two digits: 0 and 1. Each digit in a binary number is called a bit. The value of a binary number is computed similarly to decimal (base-10) numbers but using powers of 2 instead of powers of 10.

    For example, the binary number 1101 represents: \[(1 \times 2^3)+(1 \times 2^2)+(0 \times 2^1)+(1 \times 2^0)=8+4+0+1=13\]

    Similarly, the decimal number 9 is represented in binary as 1001.
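
    Python's built-in conversions make these examples easy to check:

    print(int('1101', 2))   # 13
    print(format(13, 'b'))  # 1101
    print(format(9, 'b'))   # 1001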

    Unsigned Integers

    Unsigned integers can only represent non-negative values. An n-bit unsigned integer can store values from 0 to 2^n - 1. For example, an 8-bit unsigned integer can represent values from 0 to 255 (2^8 - 1).

    Signed Integers and Two’s Complement

    Signed integers can represent both positive and negative numbers. The most common way to store signed integers is two’s complement, which simplifies arithmetic operations and ensures unique representations for zero.

    In two’s complement representation:

    • The most significant bit (MSB) acts as the sign bit (0 for positive, 1 for negative).
    • Negative numbers are stored by taking the binary representation of their absolute value, inverting the bits, and adding 1.

    For example, in an 8-bit system:

    • +5 is represented as 00000101
    • -5 is obtained by:
      1. Writing 5 in binary: 00000101
      2. Inverting the bits: 11111010
      3. Adding 1: 11111011

    Thus, -5 is stored as 11111011.

    One of the key advantages of two’s complement is that subtraction can be performed as addition. For instance, computing 5 - 5 is the same as 5 + (-5), leading to automatic cancellation without requiring separate subtraction logic in hardware.

    The range of an n-bit signed integer is from -2^(n-1) to 2^(n-1) - 1. For example, an 8-bit signed integer ranges from -128 to 127.
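
    A quick way to experiment with two's complement in Python is to mask an integer to the desired width; interpreting a pattern back just subtracts 2^n when the sign bit is set. A small illustrative sketch:

    print(format(5 & 0xFF, '08b'))   # 00000101
    print(format(-5 & 0xFF, '08b'))  # 11111011: the two's complement pattern for -5

    pattern = 0b11111011
    value = pattern - 256 if pattern & 0x80 else pattern  # undo 8-bit two's complement
    print(value)  # -5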

    Floating-Point Representation

    Most real numbers cannot be represented exactly in a computer due to limited memory. Instead, they are stored using the IEEE 754 floating-point standard, which represents numbers in the form: \[x = (-1)^s \times M \times 2^E\]

    where:

    • s is the sign bit (0 for positive, 1 for negative).
    • M (the mantissa) stores the significant digits.
    • E (the exponent) determines the scale of the number.

    How the Mantissa and Exponent Are Stored and Interpreted

    The mantissa (also called the significand) and exponent are stored in a structured manner to ensure a balance between precision and range.

    • Mantissa (Significand): The mantissa represents the significant digits of the number. In IEEE 754, the mantissa is stored in normalized form, meaning that the leading bit is always assumed to be 1 (implicit bit) and does not need to be stored explicitly. This effectively provides an extra bit of precision.
    • Exponent: The exponent determines the scaling factor for the mantissa. It is stored using a bias system to accommodate both positive and negative exponents.
      • In single precision (32-bit): The exponent uses 8 bits with a bias of 127. This means the stored exponent value is E + 127.
      • In double precision (64-bit): The exponent uses 11 bits with a bias of 1023. The stored exponent value is E + 1023.

    For example, the decimal number 5.75 is stored in IEEE 754 single precision as:

    1. Convert to binary: 5.75 = 101.11_2
    2. Normalize to scientific notation: 1.0111 × 2^2
    3. Encode:
      • Sign bit: 0 (positive)
      • Exponent: 2 + 127 = 129 (binary: 10000001)
      • Mantissa: 01110000000000000000000 (without the leading 1)

    Final representation in binary: 0 10000001 01110000000000000000000
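
    This encoding can be checked with Python's struct module, which exposes the raw bits of a 32-bit float:

    import struct

    bits = struct.unpack('>I', struct.pack('>f', 5.75))[0]
    s = format(bits, '032b')
    print(s[0], s[1:9], s[9:])  # 0 10000001 01110000000000000000000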

    Special Floating-Point Values: Inf and NaN

    IEEE 754 also defines special representations for infinite values and undefined results:

    • Infinity (Inf): This occurs when a number exceeds the largest representable value. It is represented by setting the exponent to all 1s and the mantissa to all 0s:
      • Positive infinity: 0 11111111 00000000000000000000000
      • Negative infinity: 1 11111111 00000000000000000000000
    • Not-a-Number (NaN): This is used to represent undefined results such as 0/0 or sqrt(-1). It is identified by an exponent of all 1s and a nonzero mantissa:
      • NaN: x 11111111 ddddddddddddddddddddddd (where x is the sign bit and d is any nonzero value in the mantissa)
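
    These special values arise naturally from overflow and from undefined operations, as a short NumPy experiment shows (NumPy emits runtime warnings here, which is expected):

    import numpy as np

    big = np.float32(3.0e38)
    print(big * np.float32(2))  # inf: the product overflows the largest float32 (~3.4e38)

    inf = np.float32('inf')
    print(inf - inf)            # nan: inf - inf is undefined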

    Subnormal Numbers

    Subnormal numbers (also called denormalized numbers) are a special category of floating-point numbers used to represent values that are too small to be stored in the normal format. They help address the issue of underflow, where very small numbers would otherwise be rounded to zero.

    Why Are Subnormal Numbers Needed?

    In standard IEEE 754 floating-point representation, the smallest normal number occurs when the exponent is at its minimum allowed value. However, values smaller than this minimum would normally be rounded to zero, causing a loss of precision in numerical computations. To mitigate this, IEEE 754 defines subnormal numbers, which allow for a gradual reduction in precision rather than an abrupt transition to zero.

    How Are Subnormal Numbers Represented?

    A normal floating-point number follows the form: \[x = (-1)^s \times (1 + M) \times 2^E\]

    where 1 + M represents the implicit leading bit (always 1 for normal numbers), and E is the exponent.

    For subnormal numbers, the exponent is set to the smallest possible value (E = 1 - bias), and the leading 1 in the mantissa is no longer assumed. Instead, the number is stored as: \[x = (-1)^s \times M \times 2^{1 - \text{bias}}\]

    This means subnormal numbers provide a smooth transition from the smallest normal number to zero, reducing sudden underflow errors.

    Example of a Subnormal Number

    In IEEE 754 single-precision (32-bit) format:

    • The smallest normal number occurs when E = 1 (after subtracting bias: E - 127 = -126).
    • The next smaller numbers are subnormal, where E = 0, and the mantissa gradually reduces towards zero.

    For example, a subnormal number with a small mantissa could look like:

    0 00000000 00000000000000000000001
    

    This represents a very small positive number, much closer to zero than any normal number.
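
    NumPy exposes these limits directly; the sketch below prints the smallest normal and subnormal float32 values (the smallest_subnormal attribute assumes NumPy 1.22 or newer):

    import numpy as np

    info = np.finfo(np.float32)
    print(info.tiny)                # ~1.18e-38: smallest normal float32
    print(info.smallest_subnormal)  # ~1.4e-45: smallest subnormal (NumPy >= 1.22)
    print(np.float32(1e-45))        # stored as a subnormal rather than flushed to zero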

    Limitations of Subnormal Numbers

    • They have reduced precision, as the leading 1 bit is missing.
    • Operations involving subnormal numbers are often slower on some hardware due to special handling.
    • In extreme cases, they may still lead to precision loss in calculations.

    Precision and Limitations

    Floating-point representation allows for a vast range of values, but it comes with limitations:

    • Finite Precision: Only a finite number of real numbers can be represented.
    • Rounding Errors: Some numbers (e.g., 0.1 in binary) cannot be stored exactly, leading to small inaccuracies.
    • Underflow and Overflow: Extremely small numbers may be rounded to zero (underflow), while extremely large numbers may exceed the maximum representable value (overflow).

    Example: Floating-Point Approximation

    Consider storing 0.1 in a 32-bit floating-point system. Its binary representation is repeating, meaning it must be truncated, leading to a slight approximation error. This small error can propagate in calculations, affecting numerical results.

    Conclusion

    Understanding how numbers are represented in computers is crucial in computational physics and numerical methods. In the next post, I will explore sources of numerical errors, including truncation and round-off errors, and how they impact computations.