






Nayumi

Mathematics can be hard to read because it uses many technical terms. If you come across a term you don't know, take a look at this glossary.

Kaya

While reading an article, it is convenient to keep this glossary open in a separate window (or tab).


A
Absolute value
 The absolute value of the real number \( a \) is equal to \( a \) if \( a \) is greater than or equal to 0, and equal to \( -a \) if \( a \) is less than 0. The absolute value of \( a \) is denoted by \( |a| \) .

Angle
 The figure formed between two rays \( l \) and \( m \) drawn from a single point \( O \), called the vertex. The two rays \( l \) and \( m \) are called the sides of the angle. This angle is denoted by \( \angle O \), or by \( \angle POQ \) using a point \( P \) on \( l \) and a point \( Q \) on \( m \). An angle has an inside and an outside: the smaller region is the inside, and the larger region is the outside.
 If angles \( \alpha \) and \( \beta \) share a vertex and one side, and their other two sides lie on a single straight line, they are called adjacent supplementary angles. If, in addition, the angles \( \alpha \) and \( \beta \) are equal, each of them is called a right angle.

Antiderivative
 Given a function \( f(x) \) , any function \( y = F (x) \) such that \( \frac{dy}{dx} = f (x) \) is an antiderivative of \( f(x) \).

Arc
 The part of a circle with two points \( P \) and \( Q \) as its endpoints. The line segment joining \( P \) to \( Q \) is called a chord. Also, the angle \( \angle POQ \) obtained by joining \( P \) and \( Q \) to the centre \( O \) of the circle, measured on the side containing the arc, is called the central angle of this arc.

Arithmetic mean
 It is also called mean or average. The arithmetic mean of the two numbers \( a, b \) is given by \( \frac{a + b}{2} \).

Asymptote
 A straight line \( l \) is an asymptote of a curve if the curve approaches \( l \) arbitrarily closely without ever touching it.

Axioms
 A proposition that is accepted without proof and serves as a starting point for proving other propositions.

B
Binomial theorem
 The following equation is called the binomial theorem. \[ \left( a + b \right) ^n = \sum _{k=0} ^{n} {}_{n} \rm C \it _{k} a^{n-k} b^{k} \ \ \rm \left( \it n \rm = 1, 2, \ldots \right) \] The coefficients \( {}_{n} \rm C \it _{k} \) are called binomial coefficients.
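Example: For \( n = 3 \), the binomial coefficients are \( 1, 3, 3, 1 \), and \[ \left( a + b \right) ^3 = a^3 + 3a^2 b + 3ab^2 + b^3 \]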

Bisection method
 A numerical method for finding a solution of an equation \( f(x) = 0 \), where \( f \) is a continuous function. This method consists of the following steps:

[1] Select appropriate initial values \( a_0 , b_0 \) that satisfy \[ f(a_0)f(b_0) \lt 0 , \ f(a_0) \gt 0 \] [2] If \( f \left( \frac{a_0 + b_0}{2} \right) \gt 0 \) , then \( a_1 = \frac{a_0 + b_0}{2} , \ b_1 = b_0 \) . If \( f \left( \frac{a_0 + b_0}{2} \right) \lt 0 \) , then \( a_1 = a_0 , \ b_1 = \frac{a_0 + b_0}{2} \) .

[3] Repeat the operation of finding \( a_{k+1} , b_{k+1} \) using \( a_k , b_k \ \ (k = 1, 2, 3, \ldots ) \) in the same way. Then, the following holds. \[ \lim _{k \to \infty} f \left( \frac{a_k + b_k }{2} \right) = 0 \]
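The steps above translate directly into a short program. Below is a minimal Python sketch (the function name `bisect` and its tolerance parameter are illustrative, not from the original text); it keeps whichever half-interval still brackets a sign change, so it also works when \( f(a_0) \lt 0 \).

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Minimal bisection sketch: assumes f is continuous and f(a)*f(b) < 0."""
    fa = f(a)
    if fa * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        # Keep the half-interval on which f still changes sign.
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return (a + b) / 2

# Example: the positive root of x^2 - 2 = 0.
print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # ~1.4142135623730951
```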
C
Cartesian plane
 The Cartesian plane is a plane with a horizontal axis (the \( x \)-axis) and a vertical axis (the \( y \)-axis) that intersect at right angles at the origin \( O \).
 The position of a point in this plane is specified by its coordinates. The point \( P \) reached from the origin by moving \( x \) along the horizontal axis and \( y \) along the vertical axis is denoted by \( P (x,y) \).

Cauchy-Riemann equations
 Let a complex function on a domain \( D \) be given by \[ \begin{align} f \left( z \right) = u \left( x, y \right) + i v \left( x,y \right) \end{align}\] If \( f \left( z \right) \) is holomorphic on \( D \), then the real-valued functions \( u \left( x, y \right) \) and \( v \left( x, y \right) \), both defined on \( D \), are \( C^1 \)-functions (i.e., continuously differentiable), and the following Cauchy-Riemann equations hold: \[ \begin{align} & \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \\\\ & \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x} \end{align}\]  Conversely, if both \( u \left( x, y \right) \) and \( v \left( x, y \right) \), both defined on \( D \), are \( C^1 \)-functions and the Cauchy-Riemann equations hold at every point of \( D \), then \( f \left( z \right) \) is holomorphic on \( D \).

Chain rule
Let \( f \left( x,y \right) \) be a \( C^1 \)-function of two variables, and let \( x = \phi \left( t \right) \), \( y = \psi \left( t \right) \) be \( C^1 \)-functions of one variable. Then the composite function \( f \left( \phi \left( t \right) , \psi \left( t \right) \right) \) is a \( C^1 \)-function of one variable, and the following relation holds: \[ \begin{align} \frac{df}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt} \end{align}\]
Circle
 A circle is the set of all points in the Cartesian plane whose distance from a point \( ( x_m , y_m ) \) is equal to \( r \). The point \( ( x_m , y_m ) \) is called the centre, and the distance \( r \) is called the radius. The length of a straight line segment that passes through the centre and joins two points of the circle is called the diameter. The length of one circuit along the circle is called the circumference. The circle is represented by the following equation. \[ \left( x - x_m \right) ^2 + \left( y - y_m \right) ^2 = r^2 \] The circumference is equal to the product of the diameter and \( \pi \).

Cofactor
 If \( A \) is an \( n \)th order square matrix, then the \( \left( i, j \right) \) minor of \( A \) is the determinant of the \( \left( n-1 \right) \)th order square matrix formed by deleting the \( i \)-th row and \( j \)-th column of \( A \). The \( \left( i, j \right) \) cofactor is obtained by multiplying the \( \left( i, j \right) \) minor by \( \left( -1 \right) ^{i+j}\).

Cofactor expansion
 When the \( \left( i, j \right) \) cofactor of the \( n \)th order matrix \( A \) is represented by \( \tilde{a} _{ij} \), the following expansion equation holds: \[ \begin{align} \left| A \right| &= a_{1j} \tilde{a} _{1j} + a _{2j} \tilde{a} _{2j} + \cdots + a _{nj} \tilde{a} _{nj} \ \ \ \left( j = 1,2, \cdots , n \right) \ \ \ldots \left( 1 \right) \\\\ \left| A \right| &= a_{i1} \tilde{a} _{i1} + a _{i2} \tilde{a} _{i2} + \cdots + a _{in} \tilde{a} _{in} \ \ \ \left( i = 1,2, \cdots , n \right) \ \ \ldots \left( 2 \right) \\\\ \end{align}\] \( \left( 1 \right) \) and \( \left( 2 \right) \) are referred to as cofactor expansion of the determinant along the \( j \)th column and the \( i \)th row, respectively.
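Example: Expanding a third-order determinant along the first row (\( i = 1 \)) gives \[ \left| \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right| = a_{11} \left( a_{22} a_{33} - a_{23} a_{32} \right) - a_{12} \left( a_{21} a_{33} - a_{23} a_{31} \right) + a_{13} \left( a_{21} a_{32} - a_{22} a_{31} \right) \]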

Cofactor matrix
 The \( n \)th order matrix with \( \tilde{a} _{ji} \) as the \( \left( i,j \right) \) element is called the cofactor matrix of \( A \) and is represented by \( \tilde{A} \). Thus, \[ \begin{align} \tilde{A} = \left( \begin{array}{cccc} \tilde{a}_{11} & \tilde{a}_{21} & \ldots & \tilde{a}_{n1} \\ \tilde{a}_{12} & \tilde{a}_{22} & \ldots & \tilde{a}_{n2} \\ \vdots & \vdots & & \vdots \\ \tilde{a}_{1n} & \tilde{a}_{2n} & \ldots & \tilde{a}_{nn} \end{array} \right). \end{align}\] For the cofactor matrix, the following holds. \[ \begin{align} \tilde{A} A = A \tilde{A} = \left| A \right| \cdot E_n \end{align}\]
Combination
 The number of combinations of \( n \) objects taken \( r \) at a time is denoted by \( _n \rm C \it _r \). For the number of combinations, the following holds. \[ \begin{align} _n \rm C \it _r &= \ _{n} \rm C \it _{n-r} \\\\ &= \frac{_n \rm P \it _r}{r!} \\\\ &= \frac{n(n-1)(n-2) \cdots (n-r+1)}{r(r-1)(r-2) \cdots 1} \\\\ &= \frac{n!}{r!(n-r)!} \end{align}\]
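A quick check of these formulas in Python, using the standard `math` module (`math.comb` requires Python 3.8 or later):

```python
import math

print(math.comb(5, 2))                      # 10
print(math.comb(5, 2) == math.comb(5, 3))   # True: nCr = nC(n-r)
print(math.factorial(5) // (math.factorial(2) * math.factorial(3)))   # also 10
```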
Complex conjugate matrix
 The matrix \( \left( a_{ij} ^* \right) \), obtained by replacing each element of the matrix \( A = \left( a_{ij} \right) \) with its complex conjugate, is called the complex conjugate matrix of \( A \) and is denoted by \( A^* \). The following holds for the complex conjugate matrix. \[ \begin{align} & \left( A^* \right) ^* = A \\\\ & \left( A + B \right) ^* = A^* + B^* \\\\ & \left( cA \right) ^* = c^* A^* \\\\ & \left( A B \right) ^* = A^* B^* \end{align}\]
Complex function
 A function that assigns a complex number \( w \) to each point \( z \) in a region \( D \) of the complex plane \[ \begin{align} w = f(z) \ \ \left( z \in D \right) \end{align}\] is called a complex function on \( D \).


Examples of complex functions
 In the following, let \( x \) and \( y \) be real numbers, and let \( i \) denote the imaginary unit. Also, define \( z = x + iy \), and consider complex functions in which \( z \) is the independent variable. Unless otherwise specified, the domain is assumed to be \( z \in \mathbb C \).

Linear fractional function \( T (z) \) \[ \begin{align} T (z) = \frac{az+b}{cz+d} \quad \left( ad - bc \neq 0 \quad \text{and} \quad cz + d \neq 0 \right) \end{align}\] Complex exponential function \( e^z \) \[ \begin{align} e^z = e^x \left( \cos y + i \sin y \right) \end{align}\] Trigonometric function \( \cos z \), \( \sin z \) \[ \begin{align} \cos z &= \frac{e^{iz} + e^{-iz}}{2} \\\\ \sin z &= \frac{e^{iz} - e^{-iz}}{2i} \end{align}\] Logarithmic function \( \log z \) \[ \begin{align} \log z = \ln |z| + i \arg z \quad \left( z \neq 0 , \quad - \pi \lt \arg z \lt \pi \right) \end{align}\] Exponential function \( a^z \) \[ \begin{align} a^z = e^{z \cdot \log a} = e^{z \cdot \left( \ln |a| + i \arg a \right)} \quad \left( a \neq 0 , \quad - \pi \lt \arg a \lt \pi \right) \end{align}\]
Limits and continuity
 Let \( f \left( z \right) \) be a complex function defined on \( D \subset \mathbb C \). At a point \( a \in D \), if for every \( \epsilon \gt 0 \) there exists a \( \delta \gt 0 \) such that \[ \begin{align} 0 \lt \left| z - a \right| \lt \delta \quad \text{implies} \quad \left| f \left( z \right) - b \right| \lt \epsilon \end{align}\] then \( b \) is called the limit of \( f \left( z \right) \) as \( z \to a \), and we write \[ \begin{align} \lim _{z \to a} f \left( z \right) = b \quad \text{or} \quad f \left( z \right) \to b \ \left( z \to a \right) \end{align}\] Furthermore, if \( b = f \left( a \right) \), we say that \( f \left( z \right) \) is continuous at \( a \). If \( f \left( z \right) \) is continuous at every point in \( D \), we say that \( f \left( z \right) \) is continuous on \( D \).


Complex differentiation
 Let \( f \left( z \right) \) be a complex function defined on a domain \( D \neq \varnothing \). For a point \( a \in D \), if the limit \[ \begin{align} \lim _{z \to a} \frac{f \left( z \right) - f \left( a \right)}{z-a} \end{align}\] exists, we say that \( f \left( z \right) \) is complex differentiable (or simply differentiable) at \( z = a \). This limit is called the derivative of \( f \left( z \right) \) at \( z = a \) and is denoted by \( f' \left( a \right) \).

 If a complex function \( w = f \left( z \right) \) defined on a domain \( D \neq \varnothing \) is complex differentiable at every point of \( D \), then the function that assigns to each point \( z \in D \) its derivative \( f' \left( z \right) \) is called the derivative function of \( f \left( z \right) \). It is denoted by \[ \begin{align} w' , \ f '\left( z \right), \ \frac{dw}{dz}, \ \frac{df}{dz} \end{align}\]  If \( f \left( z \right) \) is complex differentiable at every point of \( D \) and its derivative function \( f' \left( z \right) \) is continuous on \( D \), we say that \( f \left( z \right) \) is holomorphic (or analytic) on \( D \).

Complex numbers
 \( \sqrt{-1} \) is called the imaginary unit and is denoted by \( i \). A number of the form \[ c = a + bi = a + ib ,\] where \( a \) and \( b \) are real numbers, is called a complex number. \( a \) is called the real part of the complex number \( c \), and \( b \) is called the imaginary part of the complex number \( c \). The real part and the imaginary part are represented as follows, respectively. \[ a = \rm Re \ \it c\] \[ b = \rm Im \ \it c\] If the real part of \( c \) is equal to 0, i.e. \[ c = ib ,\] then \( c \) is called a pure imaginary number.

 The equality and the four basic operations of complex numbers are defined as follows.

Equality \[ \begin{align} &1) \ \ a + ib = 0 \iff a = 0 , \ b = 0 \\\\ &2) \ \ a_1 + ib_1 = a_2 + ib_2 \iff a_1 = a_2 , \ b_1 = b_2 \end{align}\] Four basic operations

Addition \[ \left( a + ib \right) + \left( c + id \right) = \left( a + c \right) + i \left( b + d \right) \] Subtraction \[ \left( a + ib \right) - \left( c + id \right) = \left( a - c \right) + i \left( b - d \right) \] Multiplication \[ \begin{align} \left( a + ib \right) \left( c + id \right) &= ac + iad + ibc + i^2 bd \\\\ &= \left( ac - bd \right) + i \left( ad + bc \right) \end{align}\] Division \[ \begin{align} \frac{a + ib}{c + id} &= \frac{a + ib}{c + id} \cdot \frac{c - id}{c - id} \\\\ &= \frac{ac + bd}{c^2 + d^2} + i \frac{bc - ad}{c^2 + d^2} \end{align}\] From the above rules, the following holds true for complex numbers as well as real numbers.
[1] Commutative law \[ c_1 + c_2 = c_2 + c_1\] \[ c_1 c_2 = c_2 c_1\] [2] Associative law \[ \left( c_1 + c_2 \right) + c_3 = c_1 + \left( c_2 + c_3 \right) \] \[ \left( c_1 c_2 \right) c_3 = c_1 \left( c_2 c_3 \right)\] [3] Distributive law \[ c_1 \left( c_2 + c_3 \right) = c_1 c_2 + c_1 c_3 \]

Complex plane
 The Cartesian plane is called the complex plane or Gauss plane when the point \( \left( x,y \right) \) is taken to represent the complex number \( z = x + iy \). In the complex plane, the \( x \) -axis is called the real axis, and the \( y \) -axis is called the imaginary axis.
 In the complex plane, if the distance from the origin \( O \) to the point \( P \left( x,y \right) \) is \( r \), and the angle between the line segment \( OP \) and the real axis is \( \theta \), the following holds. \[ \begin{align} x &= r \cos \theta \\\\ y &= r \sin \theta \end{align}\] Therefore, the complex number \( z \) can be written as: \[ z = r \left( \cos \theta + i \sin \theta \right).\] This is called the polar form of \( z \), where \( r \) is the absolute value of \( z \), and \( \theta \) is the argument of \( z \). The absolute value of \( z \) is denoted by \( |z| \), and the argument of \( z \) is denoted by \( \rm arg \ \it z \). Thus, the following holds. \[ |z| = r = \sqrt{x^2 + y^2} \] \[ \rm arg \ \it z \rm = \it \theta \]  The following formulas hold for the absolute values and arguments of two complex numbers \( z_1 \) and \( z_2 \).

Sum and Difference \[ \begin{align} \left| z_1 \pm z_2 \right| & \leq | z_1 | + | z_2 | \end{align}\] (For the sum, equality holds when \( \rm arg \ \it z \rm _1 = arg \ \it z \rm _2 \) or one of \( z_1 , z_2 \) is \( 0 \); for the difference, when the arguments differ by \( \pi \) or one of \( z_1 , z_2 \) is \( 0 \).) Product \[ \begin{align} | z_1 z_2 | &= | z_1 | | z_2 | \\\\ \rm arg \ \left( \it z \rm _1 \it z \rm _2 \right) &= \rm arg \ \it z \rm _1 + arg \ \it z \rm _2 \end{align}\] Quotient \[ \begin{align} \left| \frac{z_1}{z_2} \right| &= \frac{| z_1 |}{| z_2 |} \\\\ \rm arg \ \left( \frac{\it z \rm _1}{\it z \rm _2} \right) &= \rm arg \ \it z \rm _1 - arg \ \it z \rm _2 \end{align}\]
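In Python, the polar form described above is available through the standard `cmath` module; a quick illustration (values shown approximately):

```python
import cmath

z = 1 + 1j
r, theta = cmath.polar(z)      # absolute value |z| and argument arg z
print(r, theta)                # ~1.4142  ~0.7854 (= pi / 4)
print(cmath.rect(r, theta))    # back to approximately (1+1j)
print(abs(z), cmath.phase(z))  # the same two quantities, individually
```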
Conjugate complex numbers
 For a complex number \( z = x + iy \), \( x - iy \) is called the conjugate complex number of \( z \), and is represented as \( z^* = x - iy \). In the complex plane, \( z^* \) is the mirror image of \( z \) in the real axis. Also, the product of \( z \) and \( z^* \) is a real number and is equal to the square of \( |z| \): \[ \begin{align} zz^* &= \left( x + iy \right) \left( x - iy \right) \\\\ &= x^2 + y^2 \\\\ &= |z|^2 \end{align}\]
Conjugate transpose
 The conjugate transpose of a matrix \( A \) is the transpose of its complex conjugate matrix, and is denoted by \( A ^{\dagger} \). That is to say, \[ \begin{align} A ^{\dagger} = {}^t \left( A ^* \right) \end{align}\]  For the conjugate transpose of a matrix \( A \) , the following holds. \[ \begin{align} & \left( A ^{\dagger} \right) ^{\dagger} = A \\\\ & \left( A + B \right) ^{\dagger} = A ^{\dagger} + B ^{\dagger} \\\\ & \left( cA \right) ^{\dagger} = c^* A ^{\dagger} \\\\ & \left( A B \right) ^{\dagger} = B ^{\dagger} A ^{\dagger} \end{align}\]
Constant
 When a character or symbol is treated as a fixed value, that character or symbol is called a constant.

Continuity
 The function \( y = f(x) \) is continuous at \( a \) if \[ \lim_{x \to a} f(x) = f(a) \ .\] If the function \( y = f(x) \) is differentiable at \( a \) , then \( y = f(x) \) is continuous at \( a \) .

Cramer's rule
 Let \( A \) be an \( n \)th order invertible matrix, let \( \boldsymbol x \) be an unknown column vector with \( n \) elements, and let \( \boldsymbol b \) be a known column vector with \( n \) elements. The system of linear equations with \( n \) unknowns \[ \begin{align} A \boldsymbol x = \boldsymbol b \end{align}\] has a unique solution: \[ \begin{align} x_j &= \frac{\left| A_j \right|}{\left| A \right|} \ \ \left( j = 1,2, \ldots , n \right) \end{align}\] where \( A_j \) is the matrix obtained by replacing column \( j \) of matrix \( A \) with \( \boldsymbol b \). This is called Cramer's rule.
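As an illustration, here is a minimal Python sketch of Cramer's rule using NumPy's determinant routine (the helper name `cramer_solve` is ours, not from the original text); for large systems, Gaussian elimination is far more efficient, so treat this as a small worked example only.

```python
import numpy as np

def cramer_solve(A, b):
    """Minimal sketch of Cramer's rule; assumes det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    x = np.empty_like(b)
    for j in range(A.shape[1]):
        Aj = A.copy()
        Aj[:, j] = b               # replace column j of A by b
        x[j] = np.linalg.det(Aj) / detA
    return x

# Example: 2x + y = 3, x + 3y = 5.
print(cramer_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # [0.8 1.4]
```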

Curve
 Let \( [a,b] \) be a real interval. Suppose \( x(t) \) and \( y(t) \) are real-valued functions that are continuous on this interval. Then the complex-valued function \[ \begin{align} z(t) = x(t) + i y(t) \ \ \left( a \leq t \leq b \right) \ \ \ldots (1) \end{align}\] and the corresponding set of points on the complex plane that satisfies equation (1) are both referred to as a (continuous) curve. We call \( z(a) \) the initial point and \( z(b) \) the terminal point of the curve. In particular, if the initial and terminal points coincide, i.e., \( z(a) = z(b) \), the curve is called a closed curve. Moreover, if no points on a closed curve coincide except for the initial and terminal point, the curve is said to be simple.

Curves of the second order
 A curve of the second order, or conic section, is the set of all points in the Cartesian plane that satisfy the following equation: \[ A x^2 + B y^2 + 2Cxy + 2Dx + 2Ey + F = 0 \] where \( A \), \( B \), \( C \), \( D \), \( E \), and \( F \) are constants, and \( A \), \( B \), and \( C \) are not simultaneously equal to 0.
 The curves of the second order can be broadly divided into three types: parabolas, ellipses, and hyperbolas.

D
Decimals
 A number expressed using the decimal point (.). Decimals can be classified into terminating decimals and non-terminating decimals, and non-terminating decimals can be classified into repeating decimals and non-repeating decimals. The repeating part of a repeating decimal is called the repetend, and it is written by drawing a horizontal line (a vinculum) above it. Terminating decimals and repeating decimals are rational numbers, and non-repeating decimals are irrational numbers.

Example:
 \( 1.25 \) is a terminating decimal, and \( 1.25 = \frac{5}{4} \).
 \( 0.272727 \ldots = 0.\overline{27} \) is a repeating decimal, and \( 0.\overline{27} = \frac{3}{11} \).
 \( 3.141592 \ldots = \pi \) is a non-repeating decimal.

Definite integral
 Let \( F(x) \) be one of the antiderivatives of the function \( f(x) \). In this case, \( F(b) - F(a) \) is called the definite integral from \( a \) to \( b \) of \( f(x) \), and is denoted by \[ \int ^b _a f(x) dx ,\] where \( a \) is called the lower limit (or lower bound) of integration, and \( b \) is called the upper limit (or upper bound) of integration. In addition, \( F(b) - F(a) \) is denoted as \( \left[ F(x) \right] ^b _a \).
 For definite integrals, the following formula holds: \[ \frac{d}{dx} \int ^x _a f(t)dt = f(x) \ \ \left( a \ \rm is \ constant \it \right) \] \[ \int ^b _a c f(x)dx = c \int ^b _a f(x)dx \ \ \left( c \ \rm is \ constant \it \right) \] \[ \int ^b _a \left\{ f(x) \pm g(x) \right\} dx = \int ^b _a f(x)dx \pm \int ^b _a g(x)dx \] \[ \int ^a _a f(x)dx = 0\] \[ \int ^b _a f(x)dx = - \int ^a _b f(x)dx\] \[ \int ^b _a f(x)dx = \int ^c _a f(x)dx + \int ^b _c f(x)dx\] Integration by substitution
 If \( x = g(u) \) is differentiable on the real interval \( \left[ \alpha , \beta \right] \), and \( a = g \left( \alpha \right) \) and \( b = g \left( \beta \right) \), then the following equation holds: \[ \begin{align} \int _a ^b f(x) dx = \int _{\alpha} ^{\beta} f \left( g \left( u \right) \right) g'(u) du. \end{align}\] Integration by parts
 If both \( f(x) \) and \( g(x) \) are differentiable on the real interval \( \left[ a,b \right] \), then the following equation holds: \[ \begin{align} \int _a ^b f(x) g'(x) dx = \left[f(x)g(x) \right] ^b _a - \int _a ^b f'(x) g(x) dx. \end{align}\] Definite integrals and area
 Suppose that \( f(x) \geq g(x) \) holds for the real interval \( \left[ a,b \right] \). In this case, if \( S \) is the area of the part bounded by the two curves \( y = f(x) \), \( y = g(x) \) and the two straight lines \( x = a \) and \( x = b \), then the following equation holds: \[ S = \int ^b _a \left\{ f(x) - g(x) \right\} dx.\]
de Moivre's theorem
 With \( n \) as an integer, \( \theta \) as a real number, and \( i \) as the imaginary unit, the following holds: \[ \cos n \theta + i \sin n \theta = \left( \cos \theta + i \sin \theta \right)^n \] This is called de Moivre's theorem.
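Example: For \( n = 2 \), \[ \cos 2 \theta + i \sin 2 \theta = \left( \cos \theta + i \sin \theta \right) ^2 = \left( \cos ^2 \theta - \sin ^2 \theta \right) + i \left( 2 \sin \theta \cos \theta \right) \] Comparing real and imaginary parts yields the double-angle formulas \( \cos 2 \theta = \cos ^2 \theta - \sin ^2 \theta \) and \( \sin 2 \theta = 2 \sin \theta \cos \theta \).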

Derivative
 For a function of one variable \( y = f(x) \) whose domain is a real interval containing \( x = a \), if \[ \lim _{h \to 0} \frac{f(a + h) - f(a)}{h} \] exists, then \( f(x) \) is differentiable at \( x = a \), and this limit is called the derivative of \( f(x) \) at \( a \), denoted by \( f ' (a) \). The derivative of \( f(x) \) at \( a \) is equal to the slope of the tangent line to the graph of \( f(x) \) at \( x = a \).
 Let the function of one variable \( y = f(x) \) be differentiable on a real interval containing \( a \). At this time, the following holds: if \( f '(a) \gt 0 \), then \( f(x) \) is strictly increasing at \( x = a \); if \( f '(a) \lt 0 \), then \( f(x) \) is strictly decreasing at \( x = a \).
Derivative function
 If a function \( y = f (x) \) is differentiable at every point \( x \) in a real interval, the function that assigns to each \( x \) the derivative \( f ' (x) \) is called the derivative function of \( y = f (x) \), and is expressed as \( y' \), \( f '(x) \), \( \frac{dy}{dx} \), \( \frac{df}{dx} \), \( \frac{d}{dx} f (x) \), and so on. Finding the derivative function \( y = f'(x) \) from \( y = f(x) \) is called "differentiation".
 Several formulas for differentiation hold.

[Sum rule and constant factor rule]
 If the functions \( f(x) \) and \( g(x) \) are differentiable on the real interval \( I \), then, with \( c \) as a real constant, \( cf(x) \) and \( f(x) \pm g(x) \) are also differentiable on \( I \), and the following equations hold. \[ \left\{ cf(x) \right\} ' = c f'(x) \] \[ \left\{ f(x) \pm g(x) \right\} ' = f'(x) \pm g'(x)\]
[Product rule and quotient rule]
 If the functions \( f(x) \) and \( g(x) \) are differentiable on the real interval \( I \), then \( f(x)g(x) \) and \( \frac{f(x)}{g(x)} \) (where \( g(x) \neq 0 \)) are also differentiable on \( I \), and the following equations hold: \[ \left\{ f(x)g(x) \right\} ' = f'(x)g(x) + f(x)g'(x) \] \[ \left\{ \frac{f(x)}{g(x)} \right\} ' = \frac{f'(x)g(x) - f(x)g'(x)}{\left\{ g(x) \right\} ^2 } \]
[Chain rule for composite functions]
 Suppose that the function \( u = f(x) \) is differentiable on the real interval \( I \), the function \( y = g(u) \) is differentiable on the real interval \( J \), and the image of \( u = f(x) \) is included in \( J \). In this case, the composite function \( y = g(f(x)) \) is differentiable on the real interval \( I \), and the following equation holds. \[ y' = g'(u)f'(x) \]
[Inverse function rule]
 Let \( y = f(x) \) be a monotonic (increasing or decreasing) function on a real interval \( I \), and assume that \( f \) is differentiable on \( I \). Then the inverse function \( x = f^{-1}(y) \) is differentiable at those values of \( y \) that correspond to points \( x \) where \( f'(x) \neq 0 \), and the following formula holds. \[ x' = \frac{1}{y'} \]
 The following are the rules for the derivatives of the basic functions.

\[ \begin{align} &\left( x^n \right) ' = n x^{n-1} \ \ ( n \ \rm{is} \ \rm{integer, \ and} \ \it{x} \ \rm{is \ real.} ) \\\\ &\left( x^a \right) ' = a x^{a-1} \ \ ( a \ \rm{is} \ \rm{real, \ and} \ \it{x} \ \rm{is \ a \ positive \ real \ number.} ) \end{align} \]
Trigonometric functions \[ \begin{align} &\left( \sin x \right) ' = \cos x \ \ (x \ \rm is \ real.) \\\\ &\left( \cos x \right) ' = - \sin x \ \ (x \ \rm is \ real.) \\\\ &\left( \tan x \right) ' = \frac{1}{\cos ^2 x} \ \ (\rm Where \ \ \it x \ \ \rm is \ a \ real \ number \ except \ \frac{(2 \it n \rm -1) \pi}{2} \ \rm in \ which \ \it n \ \rm is \ an \ integer.) \end{align} \]
Functions of exponential and natural logarithm \[ \begin{align} &\left( e^x \right) ' = e^x \ \ ( x \ \rm{is \ real.} ) \\\\ &\left( a^x \right) ' = a^x \ln a \ \ ( x \ \rm{is \ real, \ and} \ \it{a} \ \rm{is \ a \ positive \ real \ number} ) \\\\ &\left( \ln x \right) ' = \frac{1}{x} \ \ ( x \rm \gt 0 ) \end{align} \]
Inverse trigonometric functions \[ \begin{align} &\left( \sin ^{-1} x \right) ' = \frac{1}{\sqrt{1-x^2}} \ \ ( -1 \lt x \lt 1 ) \\\\ &\left( \cos ^{-1} x \right) ' = - \frac{1}{\sqrt{1-x^2}} \ \ ( -1 \lt x \lt 1 ) \\\\ &\left( \tan ^{-1} x \right) ' = \frac{1}{1 + x^2} \ \ ( - \infty \lt x \lt \infty ) \end{align} \]

 The \( \boldsymbol n \)th order derivative function \( f^{\left( n \right)} (x) \) of the function \( y = f(x) \), with \( n \) a natural number, is defined inductively as follows.
[1] \( f^{\left( 1 \right) } (x) = f'(x) \)
[2] \( f^{\left( n \right) } (x) = \left\{ f^{\left( n - 1 \right) } (x) \right\} ' \)
Other symbols representing the \( n \)th order derivative function are \( y^{\left( n \right)} \), \( \frac{d^n y}{dx^n} \) and \( \frac{d^n}{dx^n} f(x) \).

 If the \( n \)th derivative \( f^{\left( n \right) } (x) \) of a function \( y = f(x) \) exists, then \( f(x) \) is said to be \( \boldsymbol n \)-times differentiable. Furthermore, if \( f^{\left( n \right) } (x) \) is continuous, \( f(x) \) is called an \( \boldsymbol n \)-times continuously differentiable function, or a \( \boldsymbol C^n \)-function. In addition, if the \( m \)th derivative \( f^{\left( m \right) } (x) \) exists for every natural number \( m \), then \( f(x) \) is called a \( \boldsymbol C^{\infty} \)-function.

Determinant
 Let \( S_n \) be the set of all permutations of \( n \) elements. For the \( n \)th order square matrix \( A = \left( a_{ij} \right) \), \[ \begin{align} \sum _{\sigma \in S_n } \rm sgn \ \sigma \cdot \it a_{\rm 1 \it \sigma \rm \left( 1 \right)} \it a_{\rm 2 \it \sigma \rm \left( 2 \right)} \cdots \it a_{n \sigma \rm \left( \it n \rm \right)} \end{align}\] is called the determinant of matrix \( A \), where \[ \begin{align} \sum _{\sigma \in S_n } \end{align}\] represents the sum over all permutations of \( n \) elements in the set \( S_n \). The determinant of matrix \( A \) is denoted by the following symbols: \[ \begin{align} \left| \begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array} \right| , \ \ \ \left| A \right| , \ \ \ \rm det \it \ A. \end{align}\]  If the column vectors of matrix \( A \) are \( \boldsymbol a_1 , \ \ \boldsymbol a_2 , \ \ \cdots , \ \ \boldsymbol a_n \), then the determinant of matrix \( A \) may also be denoted by \[ \begin{align} \rm det \left( \it \boldsymbol a \rm _1 , \ \ \it \boldsymbol a \rm _2 , \ \ \cdots , \ \ \it \boldsymbol a_n \right). \end{align}\]
Diagonalizable
 The \( n \)th order square matrix \( A \) is called diagonalizable if there exists an invertible matrix \( P \) and a diagonal matrix \( D \) such that \( P^{-1} A P = D\).

Diagonal matrix
 For an \( n \)th order square matrix \( A = \left( a_{ij} \right) \), the diagonal consists of the elements \( a_{ii} \ \ \left( i = 1,2, \ldots , n \right) \). A diagonal matrix is an \( n \)th order square matrix in which the elements outside the diagonal are all zero. Thus, a diagonal matrix has the following form: \[ \begin{align} A = \left( \begin{array}{cccc} a_{11} & 0 & \ldots & 0 \\ 0 & a_{22} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & a_{nn} \end{array} \right). \end{align}\]
Difference
 The result of subtraction.

Differential equation
 An equation containing a (partial) derivative function of an unknown function is called a differential equation. A differential equation whose unknown function is a function of one variable is called an ordinary differential equation, and one whose unknown function has two or more variables is called a partial differential equation. A function that satisfies a differential equation is called its solution. A solution containing arbitrary constants (or arbitrary functions) is called a general solution, and a solution obtained by assigning specific values to them is called a particular solution.
 When the highest order of the (partial) derivative function in a differential equation is an \( n \)th order (partial) derivative function, it is called an \( \boldsymbol n \)th order differential equation.

 Regard the unknown function and each of its (partial) derivative functions as a variable \( \mathfrak D \), and consider terms of the form \[ a \mathfrak D + b ,\] where \( a \) and \( b \) are arbitrary known functions. A differential equation that contains only such terms is called linear, and a differential equation that is not linear is called non-linear.
 A differential equation in which every term contains the unknown function or one of its (partial) derivative functions, or is 0, is called homogeneous, and a differential equation that is not homogeneous is called inhomogeneous.

Distance
 In the Cartesian plane, the distance \( d \) between the point \( P ( x_1 , y_1 ) \) and the point \( Q ( x_2 , y_2 ) \) is given by: \[ d = \sqrt{ \left( x_2 - x_1 \right) ^2 + \left( y_2 - y_1 \right) ^2 }. \]
Distributive law
 One of the laws of calculation connecting addition and multiplication. \[ a ( b + c ) = a b + a c \]
Double exponential function
 Suppose that \( a \gt 0 \) and \( b \gt 0 \). In this case, the following function is called double exponential function. \[ f(x) = a^{b^x} \]
E
Eigenvalue
 Let \( A \) be an \( n \)th order matrix and \( E_n \) be the \( n \)th order identity matrix. The following equation for \( \lambda \) is called an eigenequation or characteristic equation. \[ \begin{align} \left| A - \lambda E_n \right| = \left| \begin{array}{cccc} a_{11} - \lambda & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} - \lambda & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} - \lambda \end{array} \right| = 0 \end{align}\] The eigenequation is an \( n \)th order equation for \( \lambda \), and its solutions, \( \lambda _1 , \lambda _2 , \cdots , \lambda _n \), are called eigenvalues. If \( \boldsymbol x \) is an unknown column vector with \( n \) elements, then for each eigenvalue \( \lambda _k \left( k = 1,2, \cdots , n \right) \), the following equation has a non-trivial solution \( \boldsymbol x_k \). \[ \begin{align} A \boldsymbol x = \lambda _k \boldsymbol x \ \ \ \left( k = 1,2, \cdots , n \right) \end{align}\] The column vector with \( n \) elements \( \boldsymbol x_k \), which is the solution of this equation, is called the eigenvector with respect to \( \lambda _k \).
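In practice, eigenvalues and eigenvectors are computed numerically rather than by solving the eigenequation by hand. A minimal Python sketch using NumPy (`numpy.linalg.eig` returns the eigenvalues and, as columns, the corresponding eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvectors are the columns
print(eigenvalues)                             # e.g. [3. 1.] (order not guaranteed)

# Verify A x = lambda x for the first eigenpair:
x0 = eigenvectors[:, 0]
print(np.allclose(A @ x0, eigenvalues[0] * x0))   # True
```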

Elementary matrix
 The following three types of invertible matrices are called elementary matrices.

[1] A matrix \( P_n \left( i , j \right) \), which is the \( n \)th order identity matrix with columns \( i \) and \( j \) swapped.

[2] A matrix \( Q_n \left( i ; \ c \right) \), which is the \( n \)th order identity matrix with its \( \left( i, i \right) \) element replaced by a non-zero constant \( c \).

[3] A matrix \( R_n \left( i,j ; \ c \right) \), which is the \( n \)th order identity matrix with its \( \left( i, j \right) \) element \( \left( i \neq j \right) \) replaced by a constant \( c \).

 Multiplying matrix \( A \) on the left (right) by an elementary matrix is called an elementary row (column) operation. Together, the elementary row operations and the elementary column operations are called elementary operations.

Ellipse
 An ellipse is a type of conic section. If the center coordinates of the ellipse are \( (x_m , y_m ) \) and one of the axes of the ellipse is parallel to the \( x \)-axis, the equation for the ellipse is as follows. \[ \frac{\left( x - x_m \right) ^2 }{a^2} + \frac{\left( y - y_m \right) ^2 }{b^2} = 1 \ \ (a \gt 0, b \gt 0) \]
Equation
 A mathematical expression in which unknown variables, constants, numbers, and arithmetic symbols are arranged appropriately and connected by an equal sign \( (=) \) is called an equation.
Example: \( x + a = 3 \) is an equation for \( x \) if \( x \) is an unknown variable and \( a \) is a constant.

Euler method
 The Euler method is one of the numerical integration methods for ordinary differential equations. Consider an initial value problem \( y' = F(x, y) \) with initial value \( \left( x_0,y_0 \right) \). With \( n \) a natural number and \( h \) the step size, the numerical solution \( \left( x_n,y_n \right) \) is obtained as follows. \[ \begin{align} x_n &= x_{n-1} + h \\\\ y_n &= y_{n-1} + hF(x_{n-1},y_{n-1}) \end{align}\]
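A minimal Python sketch of this recursion (the helper name `euler` is illustrative, not from the original text), applied to \( y' = y \), \( y(0) = 1 \), whose exact solution is \( e^x \):

```python
def euler(F, x0, y0, h, n):
    """Forward Euler for the initial value problem y' = F(x, y), y(x0) = y0."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * F(x0, y0)   # one Euler step
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1])   # ~2.7048, approaching e = 2.71828... as h shrinks
```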
Euler's formula
 With \( i \) as the imaginary unit and \( \theta \) as a real number, the following equation holds. \[ e^{i \theta} = \cos \theta + i \sin \theta \] This is called Euler's formula.
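A quick numerical check of the formula in Python, using the standard `cmath` module:

```python
import cmath
import math

print(cmath.exp(1j * math.pi))    # ~(-1+0j): Euler's formula gives e^{i pi} = -1

theta = 0.5
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs) < 1e-15)     # True
```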

Euler's number
 The exponential function whose tangent slope (derivative) at \( x = 0 \) is \( 1 \) is expressed as \[ y = e^x \] where \( e \) is called Euler's number or Napier's constant.
 The following limit formulas hold for \( e \): \[ \lim _{x \to 0} \frac{e^x - 1}{x} = 1 \] \[ \lim _{x \to 0} \left( 1+x \right) ^{\frac{1}{x}} = e \]
Expectation
 Suppose that the number of possible values of a random variable \( X \) is finite, and they are represented as \( x_1,\ x_2, \ \cdots ,\ x_n \). In this case, the expectation (or mean) of \( X \) is denoted by \( E(X) \) and is defined by the following equation. \[ E(X) = \sum _{i=1} ^n x_i \cdot P \left( X = x_i \right) \] Also, if the random variable \( X \) takes real values in the interval \( a \leq X \leq b \) (where \( a = - \infty \) and \( b = \infty \) are allowed), then the expectation \( E(X) \) of the random variable \( X \) is defined by the following equation using the probability density function \( f(x) \) of \( X \). \[ E(X) = \int _a ^b x f(x) dx \] Linearity of expectation
 For the expectation of the random variable \( X \), if \( a \) is a constant, the following holds. \[ E(X+a) = E(X) + a \] \[ E(aX) = aE(X) \] Expectation of the product of probability spaces
 Let \( X \) be the random variable in probability space \( \left( \Omega _1 , \mathcal P \rm ( \Omega _1) , \it P \rm _1 \right) \) and let \( Y \) be the random variable in probability space \( \left( \Omega _2 , \mathcal P \rm ( \Omega _2) , \it P \rm _2 \right) \). For \( E(X+Y) \) and \( E(XY) \) defined in the product of these two probability spaces, the following holds: \[ E(X+Y) = E(X) + E(Y) \] \[ E(XY) = E(X) \cdot E(Y)\] Related inequalities
 Let \( f(x) \) be a non-negative monotonically increasing function defined on the real interval \( I \), and let \( f( \alpha ) \neq 0 \) hold for the constant \( \alpha \in I \). At this time, with \( X \) as a random variable, the following inequality holds. \[ P \left( X \geq \alpha \right) \leq \frac{E \left( f(X) \right)}{f( \alpha )}\] Similarly, if \( g(x) \) is a non-negative monotonically decreasing function defined on the real interval \( J \) and \( g( \beta ) \neq 0 \) holds for the constant \( \beta \in J \), then the following inequality holds. \[ P \left( X \leq \beta \right) \leq \frac{E \left( g(X) \right)}{g( \beta )}\] From these inequalities, for the constant \( c \gt 0 \), the following Chebyshev's inequality holds. \[ P \left( |X - E(X)| \geq c \right) \leq \frac{E \left( \left( X - E(X) \right) ^2 \right)}{c^2} = \frac{V(X)}{c^2} \]
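As an illustration of Chebyshev's inequality, here is a small Python simulation for the uniform distribution on \( [0,1] \), for which \( E(X) = 1/2 \) and \( V(X) = 1/12 \); the observed frequency stays below the bound (the variable names are ours):

```python
import random

# Empirical check of P(|X - 1/2| >= c) <= 1/(12 c^2) for X uniform on [0, 1].
random.seed(0)
samples = [random.random() for _ in range(100_000)]
c = 0.4
freq = sum(abs(x - 0.5) >= c for x in samples) / len(samples)
print(freq, 1 / (12 * c ** 2))   # ~0.2, comfortably below the bound ~0.5208
```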
Explicit method
 The explicit method is one of the numerical integration methods for partial differential equations with time \( t \) as a variable.

Exponential function
 Suppose that \( a \) is a positive real number. The function \[ y = a^x \] is called an exponential function with base \( a \).

Extremum
 When the function of one variable \( y = f(x) \) changes from increasing to decreasing at \( x = a \), \( f(a) \) is called a local maximum.
 On the other hand, when the function of one variable \( y = f(x) \) changes from decreasing to increasing at \( x = a \), \( f(a) \) is called a local minimum.
 Each of a local maximum and a local minimum is called an extremum; together, they are called extrema.

 Suppose that the function of one variable \( y = f(x) \) is differentiable on a real interval containing \( a \). In this case, if \( y = f(x) \) attains an extremum at \( x = a \), then \( f'(a) = 0 \).

F
Factorial
 The product of the natural numbers from \( 1 \) to \( n \) is called the factorial of \( n \), represented as \( n! \). \[ n! = n(n-1)(n-2) \cdots 3 \cdot 2 \cdot 1 = \prod _{k=1} ^n k \] By convention, \( 0!=1 \).

Finite difference approximation
 A method for approximating the derivative function.

[1] Finite difference approximation of a one-variable function \( f \left( x \right) \).

・Forward difference approximation \[ \frac{df}{dx} \cong \frac{f \left( x + h \right) - f \left( x \right)}{h} \] ・Backward difference approximation \[ \frac{df}{dx} \cong \frac{f \left( x \right) - f \left( x - h \right)}{h} \] ・Central difference approximation \[ \begin{align} \frac{df}{dx} & \cong \frac{f \left( x + h \right) - f \left( x - h \right)}{2h} \\\\ \frac{d^2f}{dx^2} & \cong \frac{f \left( x + h \right) - 2 f \left( x \right) + f \left( x - h \right)}{h^2} \end{align}\]
[2] Finite difference approximation of a two-variable function \( f \left( x , y \right) \).

・Forward difference approximation \[ \begin{align} \frac{\partial f}{\partial x} & \cong \frac{f \left( x + h , y \right) - f \left( x , y \right)}{h} \\\\ \frac{\partial f}{\partial y} & \cong \frac{f \left( x , y + h \right) - f \left( x , y \right)}{h} \end{align}\] ・Backward difference approximation \[ \begin{align} \frac{\partial f}{\partial x} & \cong \frac{f \left( x , y \right) - f \left( x - h, y \right)}{h} \\\\ \frac{\partial f}{\partial y} & \cong \frac{f \left( x , y \right) - f \left( x , y - h \right)}{h} \end{align}\] ・Central difference approximation \[ \begin{align} \frac{\partial f}{\partial x} & \cong \frac{f \left( x + h, y \right) - f \left( x - h, y \right)}{2h} \\\\ \frac{\partial f}{\partial y} & \cong \frac{f \left( x , y + h \right) - f \left( x , y - h \right)}{2h} \\\\ \frac{\partial ^2 f}{\partial x^2} & \cong \frac{f \left( x + h, y \right) - 2 f \left( x, y \right) + f \left( x - h, y \right)}{h^2} \\\\ \frac{\partial ^2 f}{\partial y^2} & \cong \frac{f \left( x , y + h \right) - 2 f \left( x, y \right) + f \left( x , y - h \right)}{h^2} \end{align}\]
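A minimal Python sketch of the one-variable formulas in [1] (the helper names are ours), applied to \( f(x) = \sin x \) at \( x = 1 \), where the exact derivative is \( \cos 1 \approx 0.5403 \):

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h             # accurate to O(h)

def backward(f, x, h):
    return (f(x) - f(x - h)) / h             # accurate to O(h)

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)   # accurate to O(h^2)

f, x, h = math.sin, 1.0, 1e-4
print(forward(f, x, h), backward(f, x, h), central(f, x, h))
# All three are close to cos(1) = 0.5403...; the central difference is closest.
```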
Finite difference schemes
 A general term for numerical integration methods that obtain numerical solutions by solving difference equations, which express differential equations using finite difference approximations.

Finiteness
 The state of being limited or finite.

Floor function
 A function represented by the following equation, where \( \lfloor x \rfloor \) denotes the greatest integer less than or equal to \( x \). \[ y = \lfloor x \rfloor \] For non-negative numbers this amounts to truncating the decimal part; for negative numbers, however, it rounds down rather than toward zero (for example, \( \lfloor -1.5 \rfloor = -2 \)).
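A quick Python illustration of this rounding-down behaviour, using the standard `math.floor`:

```python
import math

print(math.floor(2.7))    # 2
print(math.floor(-1.5))   # -2: rounds down
print(int(-1.5))          # -1: truncation toward zero differs for negatives
```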
Fourier series
 Let the one-variable function \( f(x) \) satisfy the following conditions (i) and (ii).

(i) \( f(x) \) is a periodic function with a period of \( 2L \).

(ii) \( f(x) \) and \( f'(x) \) are piecewise-continuous on the real interval \( (-L, L) \).

In this case, define \[ \begin{align} a_n &= \frac{1}{L} \int ^L _{-L} f (x) \cos \frac{n \pi x}{L} dx \ \ \ \ \left( n = 0,1,2, \ldots \right) \\\\ b_n &= \frac{1}{L} \int ^L _{-L} f (x) \sin \frac{n \pi x}{L} dx \ \ \ \ \left( n = 1,2, \ldots \right) \end{align}\] Then \[ \frac{a_0}{2} + \sum ^{\infty} _{n=1} \left( a_n \cos \frac{n \pi x}{L} + b_n \sin \frac{n \pi x}{L} \right) \ \ \ \ldots (*)\] satisfies the following:

(a) When \( x \) is a continuous point, it converges to \( f(x) \).

(b) When \( x \) is a discontinuous point, it converges to \( \left\{ f(x+0) + f(x-0) \right\} / 2 \).

The \( a_n \) and \( b_n \) are called Fourier coefficients, and \( (*) \) is called the Fourier series of \( f(x) \).
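As a numerical illustration of the coefficient formulas above, here is a minimal Python sketch (the helper `fourier_coefficients` and its sampling parameter are ours) that approximates \( a_n \) and \( b_n \) by a Riemann sum, applied to a square wave whose series is \( \frac{4}{\pi} \left( \sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots \right) \).

```python
import math

def fourier_coefficients(f, L, n_max, samples=10_000):
    """Approximate the Fourier coefficients a_n, b_n by a midpoint Riemann sum."""
    dx = 2 * L / samples
    xs = [-L + (k + 0.5) * dx for k in range(samples)]
    a = [sum(f(x) * math.cos(n * math.pi * x / L) for x in xs) * dx / L
         for n in range(n_max + 1)]
    b = [sum(f(x) * math.sin(n * math.pi * x / L) for x in xs) * dx / L
         for n in range(1, n_max + 1)]
    return a, b

# Square wave f(x) = sign(x) on (-pi, pi); all a_n vanish, b_n = 4/(n pi) for odd n.
a, b = fourier_coefficients(lambda x: 1.0 if x > 0 else -1.0, math.pi, 5)
print([round(v, 4) for v in b])   # ~[1.2732, 0.0, 0.4244, 0.0, 0.2546]
```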

The Fourier sine series and the Fourier cosine series
 The Fourier series for an even function \( f(x) \) with period of \( 2L \) is given by: \[ \begin{align} & \frac{a_0}{2} + \sum ^{\infty} _{n=1} a_n \cos \frac{n \pi x}{L} \\\\ & a_n = \frac{2}{L} \int ^L _{0} f (x) \cos \frac{n \pi x}{L} dx \ \ \ \ \left( n = 0,1,2, \ldots \right) \end{align}\] This is called the Fourier cosine series. On the other hand, the Fourier series for an odd function \( f(x) \) with period of \( 2L \) is given by: \[ \begin{align} & \sum ^{\infty} _{n=1} b_n \sin \frac{n \pi x}{L} \\\\ & b_n = \frac{2}{L} \int ^L _{0} f (x) \sin \frac{n \pi x}{L} dx \ \ \ \ \left( n = 1,2, \ldots \right) \end{align}\] This is called the Fourier sine series.

Expansion on half of the interval
 When a one-variable function \( f(x) \) defined on the real interval \( \left[0, L \right] \) is extended to an even function with period \( 2L \), the Fourier series of \( f(x) \) is given by the Fourier cosine series. This is called the Fourier cosine series on half of the interval. On the other hand, when \( f(x) \) is extended to an odd function, the Fourier series of \( f(x) \) is given by the Fourier sine series. This is called the Fourier sine series on half of the interval.

Representation using complex numbers
 Using the exponential representation of trigonometric functions with complex numbers, the Fourier series for \( f(x) \) is expressed as: \[ \begin{align} \sum ^{\infty} _{n = - \infty} c_n e^{i \left( \frac{n \pi x}{L} \right) } \end{align}\] where the Fourier coefficients \( c_n \) are given by: \[ \begin{align} c_n = \frac{1}{2L} \int ^L _{-L} f(x) e^{-i \left( \frac{n \pi x}{L} \right)} dx \end{align}\] Double Fourier series
 Let \( L_1 \) and \( L_2 \) be arbitrary positive constants. Suppose the two-variable function \( f \left( x, y \right) \) satisfies the following conditions for any \( x \) and \( y \): \[ \begin{align} f \left( x + 2L_1, y \right) = f \left( x, y + 2L_2 \right) = f \left( x, y \right) \end{align}\] In this case, the double Fourier series for \( f \left( x, y \right) \) is defined by the following expression: \[ \begin{align} \sum _{m = - \infty} ^{\infty} \sum _{n = - \infty} ^{\infty} c_{nm} e^{i \left( \frac{n \pi x}{L_1} + \frac{m \pi y}{L_2} \right)} \end{align}\] where the double Fourier coefficients are defined by the following expression: \[ \begin{align} c_{nm} = \frac{1}{4 L_{1} L_{2}} \int ^{L_1} _{-L_1} \left\{ \int ^{L_2} _{-L_2} f \left( x, y \right) e^{-i \left( \frac{n \pi x}{L_1} + \frac{m \pi y}{L_2} \right)} dy \right\} dx \end{align}\] Multiple Fourier series
 Let \( L_1 , L_2 , \ldots , L_N \) be arbitrary positive constants. Suppose the \( N \)-variable function \( f \left( x_1, x_2, \ldots , x_N \right) \) satisfies the following conditions for any \( x_1, x_2 , \ldots , x_N \): \[ \begin{align} & \ \ \ \ \ \ f \left( x_1 + 2L_1, x_2 , \ldots , x_N \right) \\\\ &= f \left( x_1, x_2 + 2L_2 , \ldots , x_N \right) \\\\ & \ \ \cdots \cdots \cdots \\\\ &= f \left( x_1, x_2, \ldots , x_N + 2L_N \right) = f \left( x_1, x_2, \ldots , x_N \right) \end{align}\] In this case, the \( \boldsymbol N \)-dimensional Fourier series for \( f \left( x_1, x_2, \ldots , x_N \right) \) is defined as: \[ \begin{align} \sum _{n_N = - \infty} ^{\infty} \cdots \sum _{n_2 = - \infty} ^{\infty} \sum _{n_1 = - \infty} ^{\infty} c_{n_{1} n_{2} \cdots n_{N}} e^{i \sum _{i=1} ^N \frac{n_i \pi x_i}{L_i}} \end{align}\] where the \( \boldsymbol N \)-dimensional Fourier coefficients \( c_{n_{1} n_{2} \cdots n_{N}} \) are defined as: \[ \begin{align} c_{n_{1} n_{2} \cdots n_{N}} = \frac{1}{2^N L_{1} L_{2} \cdots L_{N}} \int ^{L_1} _{-L_1} \left\{ \int ^{L_2} _{-L_2} \left\{ \cdots \left\{ \int ^{L_N} _{-L_N} f \left( x_1, x_2, \ldots , x_N \right) e^{-i \sum _{i=1} ^N \frac{n_i \pi x_i}{L_i}} dx_{N} \right\} \cdots \right\} dx_2 \right\} dx_1 \end{align}\]

Fraction
 One of the notations for representing a quotient. \[ a \div b = \frac{a}{b} \] The dividend \( a \) is called the numerator, and the divisor \( b \) is called the denominator.
 Aligning the denominators of two or more fractions is called finding a common denominator. For example, let \( l \) and \( n \) be natural numbers, and \( k \) and \( m \) be integers, with \( l \neq n \). In this case, finding a common denominator is performed as follows: \[ \frac{k}{l} + \frac{m}{n} = \frac{kn+lm}{ln} \]
Function
 A relationship in which the value of one variable is determined by the values of other variables and constants. For example, when the variable \( y \) is uniquely determined by the value of the variable \( x \), we say "\( y \) is a function of \( x \)," and it is expressed as \( y = f(x) \) or simply \( f(x) \). In this case, \( x \) is called the independent variable and \( y \) is called the dependent variable. The range of values that an independent variable can take is called the domain, and the range of values that the dependent variable can take is called the image. Also, the value of the function \( f (x) \) at \( x = a \) is denoted as \( f(a) \).

 A function of one independent variable is called a univariate function, while a function of two or more independent variables is called a multivariate function.

 If there are two functions \( u = f(x) \) and \( y = g(u) \), and the image of \( u = f(x) \) is included in the domain of \( y = g(u) \), then the function \( y = g(f(x)) \) can be defined, which is called the composite function of \( f \) and \( g \).

 For a function \( y = f(x) \), if \( f(a) \leq f(b) \) for all \( a \leq b \) in its domain, \( f(x) \) is called a monotonically increasing function. On the other hand, if \( f(a) \geq f(b) \) for all \( a \leq b \) in its domain, \( f(x) \) is called a monotonically decreasing function.

 Let \( y = f(x) \) be a continuous and monotonic (increasing or decreasing) function on its domain. In this case, the function that assigns \( x \) to each \( y \) is called the inverse function of \( y = f(x) \), and is denoted by \( x = f^{-1}(y) \). Alternatively, by interchanging \( x \) and \( y \), it can be written as \( y = f^{-1}(x) \).

 For the function \( y = f(x) \), if \( f(-x) = f(x) \), then \( f(x) \) is an even function. Similarly, if \( f(-x) = -f(x) \), then \( f(x) \) is an odd function.

 The product of two even functions is an even function. Also, the product of two odd functions is an even function. On the other hand, the product of an even function and an odd function is an odd function.

 If \( g (x) \) is an even function and \( h(x) \) is an odd function, then the following holds for the definite integral on any real interval \( \left[-M,M \right] \). \[ \begin{align} \int ^M _{-M} g(x) dx &= 2 \int ^M _{0} g(x) dx \\\\ \int ^M _{-M} h(x) dx &= 0 \end{align}\]
Function of several variables
 A function with two or more independent variables is called a function of several variables or multivariable function or multivariate function.

 Let \( n \) be a natural number. For an \( n \)-variable function \( z = f(x_1, x_2, \cdots, x_n) \), if for every \( \epsilon \gt 0 \) there exists a \( \delta \gt 0 \) such that for all points \( (x_1, x_2, \cdots, x_n) \) satisfying \[ \sqrt{\left( x_1 - a_1 \right)^2 + \left( x_2 - a_2 \right)^2 + \cdots + \left( x_n - a_n \right)^2} \lt \delta \] we have \[ \begin{align} \left| f (x_1 , x_2 , \cdots , x_n ) - b \right| \lt \epsilon \end{align}\] then we write \[ \lim _{( x_1 , x_2 , \cdots , x_n ) \to ( a_1 , a_2 , \cdots , a_n )} f (x_1 , x_2 , \cdots , x_n ) = b \] and say that \( z = f (x_1 , x_2 , \cdots , x_n ) \) converges to \( b \) as \( ( x_1 , x_2 , \cdots , x_n ) \to ( a_1 , a_2 , \cdots , a_n ) \). In this case, \( b \) is called the limit value.

 If \[ \lim _{( x_1 , x_2 , \cdots , x_n ) \to ( a_1 , a_2 , \cdots , a_n )} f (x_1 , x_2 , \cdots , x_n ) = f (a_1 , a_2 , \cdots , a_n ) \] holds, then the function \( z = f(x_1, x_2, \cdots, x_n) \) is said to be continuous at \( (x_1, x_2, \cdots, x_n) = (a_1, a_2, \cdots, a_n) \). A function that is continuous at every point in its domain is called a continuous function.

G
Graph
 For a function \( y = f(x) \), the set of all points \( (x, y) \) that satisfy this relationship is called the graph of the function \( y = f(x) \). Typically, the graph of a one-variable function is drawn in the Cartesian plane with the origin \( O(0,0) \).

H
Hermitian matrix
 A square matrix \( A \) is called a Hermitian matrix if it satisfies \( A = A^{\dagger} \). In particular, a Hermitian matrix that is a real matrix is called a real symmetric matrix.

Hyperbola
 A hyperbola is a type of conic section. If the hyperbola has its center at \( (x_m, y_m) \) and intersects the \( x \)-axis, it is represented by the equation: \[ \frac{ \left( x - x_m \right) ^2 }{a^2} - \frac{\left( y - y_m \right) ^2 }{b^2} = 1 \ \ (a \gt 0, b \gt 0 )\] On the other hand, if the hyperbola intersects the \( y \)-axis, it is represented by the equation: \[ \frac{ \left( x - x_m \right) ^2 }{a^2} - \frac{\left( y - y_m \right) ^2 }{b^2} = -1 \ \ (a \gt 0, b \gt 0 )\] A hyperbola has asymptotes, given by: \[ y - y_m = \pm \frac{b}{a} \left( x - x_m \right) \]
I
Identity matrix
 An \( n \)th order square matrix in which all diagonal elements are 1 and all other elements are 0 is called the \( n \)th order identity matrix, denoted as \( E_n \) or simply \( E \). That is, \[ \begin{align} E_n = \left( \begin{array}{cccc} 1 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{array} \right) \end{align}\]  For any \( m \times n \) matrix \( A \), the following properties hold: \[ \begin{align} & A E_n = A \\\\ & E_m A = A \end{align}\]  The \( n \) column vectors of the identity matrix \( E_n \) are called the column unit vectors with \( n \) elements: \[ \begin{align} \boldsymbol e_1 \ \mathrm = \left( \begin{array}{cccc} 1 \\ 0 \\ \vdots \\ 0 \end{array} \right) , \ \boldsymbol e_2 \ \mathrm = \left( \begin{array}{cccc} 0 \\ 1 \\ \vdots \\ 0 \end{array} \right) , \ \ldots \ , \ \boldsymbol e_n \ \mathrm = \left( \begin{array}{cccc} 0 \\ 0 \\ \vdots \\ 1 \end{array} \right) \end{align}\]
Indefinite integral
 Let \( F(x) \) be an antiderivative of the function \( f(x) \). In this case, \[ F(x) + C \quad (C \text{ is an arbitrary constant}) \] is called the indefinite integral of \( f(x) \) and is expressed as \[ \int f(x) dx. \] Additionally, \( C \) is specifically referred to as the constant of integration. The process of finding the indefinite integral of a function \( f(x) \) is called integration. The following formulas hold for indefinite integrals: \[ \left\{ \int f(x)dx \right\} ' = f(x) \] \[ \int f'(x)dx = f(x) + C \] \[ \int cf(x)dx = c \int f(x)dx + C \ \ \left( c \ \text{ is a constant} \right) \] \[ \int \left\{ f(x) \pm g(x) \right\} dx = \int f(x)dx \pm \int g(x)dx + C \]
Integration by substitution
 If \( x = g(u) \) is differentiable, then the following formula holds: \[ \int f(x)dx = \int f(g(u))g'(u)du + C \] Integration by parts
 If both \( f(x) \) and \( g(x) \) are differentiable, then the following formula holds: \[ \int f(x)g'(x)dx = f(x)g(x) - \int f'(x)g(x)dx + C \]

Common indefinite integrals

Power functions \[ \begin{align} &\int x^{n} dx = \frac{1}{n+1} x^{n+1} + C \\\\ & \quad ( n \text{ is an integer except } -1, \ x \text{ is a real number}) \\\\ &\int \frac{1}{x} dx = \ln x + C \quad ( x > 0 ) \\\\ &\int x^{a} dx = \frac{1}{a+1} x^{a+1} + C \\\\ & \quad ( a \text{ is a real number except } -1, \ x \text{ is a positive real number} ) \end{align} \]
Trigonometric functions \[ \begin{align} &\int \sin x dx = - \cos x + C \quad (x \text{ is a real number})\\\\ &\int \cos x dx = \sin x + C \quad (x \text{ is a real number})\\\\ &\int \frac{1}{\cos ^2 x} dx = \tan x + C \\\\ & \quad (x \text{ is a real number except } \frac{(2n-1) \pi}{2}, \ n \text{ is an integer}) \end{align} \]
Exponential functions \[ \begin{align} &\int e^x dx = e^x + C \quad ( x \text{ is a real number} ) \\\\ &\int a^x dx = \frac{a^x}{\ln a} + C \\\\ & \quad ( x \text{ is a real number, } a \text{ is a positive real number except } 1 ) \end{align} \]
Inverse trigonometric functions \[ \begin{align} &\int \sin ^{-1} x dx = x \sin ^{-1} x + \sqrt{1-x^2} + C \ \ ( -1 \lt x \lt 1 ) \\\\ &\int \cos ^{-1} x dx = x \cos ^{-1} x - \sqrt{1-x^2} + C \ \ ( -1 \lt x \lt 1 ) \\\\ &\int \tan ^{-1} x dx = x \tan ^{-1} x - \frac{1}{2} \ln \left( 1 + x^2 \right) + C \ \ ( -\infty \lt x \lt \infty ) \end{align}\]
Inequality
 An expression where unknown variables, constants, and numbers are arranged appropriately and connected by inequality signs (\(\lt, \gt, \leqq, \geqq\)).
 For example, \( x + a \lt 3 \) is an inequality with respect to \( x \), where \( x \) is an unknown variable and \( a \) is a constant.

Infinity
 The state of being limitless. A quantity that increases beyond all bounds is represented as \( \infty \), and a quantity that decreases beyond all bounds is represented as \( -\infty \).

Inner product
 Let \( \boldsymbol{x} \) and \( \boldsymbol{y} \) be column vectors with \( n \) elements, represented as follows: \[ \begin{align} \boldsymbol x \ \mathrm = \left( \begin{array}{cccc} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array} \right) , \ \ \ \ \boldsymbol y \ \mathrm = \left( \begin{array}{cccc} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{array} \right) \end{align}\] The quantity \( \left( \boldsymbol{x}, \boldsymbol{y} \right) \), defined by the following formula, is called the inner product of \( \boldsymbol{x} \) and \( \boldsymbol{y} \): \[ \begin{align} \left( \boldsymbol x, \boldsymbol y \right) = \sum _{i=1} ^n x_i y_i ^* \end{align}\]  The inner product satisfies the following properties, known as conjugate linearity: \[ \begin{align} & \left( \boldsymbol x_1 + \boldsymbol x_2 , \boldsymbol y \right) = \left( \boldsymbol x_1 , \boldsymbol y \right) + \left( \boldsymbol x_2 , \boldsymbol y \right) \\\\ & \left( \boldsymbol x , \boldsymbol y_1 + \boldsymbol y_2 \right) = \left( \boldsymbol x , \boldsymbol y_1 \right) + \left( \boldsymbol x , \boldsymbol y_2 \right) \\\\ & \left( c \boldsymbol x , \boldsymbol y \right) = c \left( \boldsymbol x , \boldsymbol y \right) \\\\ & \left( \boldsymbol x , c \boldsymbol y \right) = c^* \left( \boldsymbol x , \boldsymbol y \right) \\\\ & \left( \boldsymbol y , \boldsymbol x \right) = \left( \boldsymbol x , \boldsymbol y \right) ^* \end{align}\] where \( \boldsymbol{x}, \boldsymbol{x}_1, \boldsymbol{x}_2, \boldsymbol{y}, \boldsymbol{y}_1, \boldsymbol{y}_2 \) are all column vectors with \( n \) elements, and \( c \) is a complex number.

 For the inner product of a vector with itself, \( \left( \boldsymbol{x} , \boldsymbol{x} \right) \) is either zero or a positive real number: \[ \left( \boldsymbol{x} , \boldsymbol{x} \right) \geq 0 \] with equality holding if and only if all components of \( \boldsymbol{x} \) are zero. This property is called positivity of the inner product.

 If \[ \left( \boldsymbol{x} , \boldsymbol{y} \right) = 0 \] then \( \boldsymbol{x} \) and \( \boldsymbol{y} \) are said to be orthogonal.
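Note that NumPy's `numpy.vdot` conjugates its first argument, whereas the definition above conjugates the second; a direct translation therefore looks like this minimal sketch (the helper name `inner` is ours):

```python
import numpy as np

def inner(x, y):
    # Inner product as defined above: sum_i x_i * conj(y_i)
    # (the *second* argument is conjugated).
    return np.sum(np.asarray(x) * np.conj(np.asarray(y)))

x = np.array([1 + 2j, 3j])
y = np.array([2, 1 - 1j])
print(inner(x, y))             # (-1+7j)
print(inner(x, x).real >= 0)   # positivity: (x, x) = |x_1|^2 + |x_2|^2 >= 0
```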

Integer
 A number obtained by performing the operation of adding \( +1 \) or \( -1 \) to \( 0 \) any number of times is called an integer.
Example: \( \ldots, -3, -2, -1, 0, 1, 2, 3, \ldots \)

Intercept
 A point where the graph of a function \( y = f(x) \) intersects the \( x \)-axis or the \( y \)-axis is called an intercept.

Inverse matrix
 For an \( n \)th order matrix \( A \), if there exists a matrix \( X \) satisfying \[ \begin{align} XA = AX = E \end{align}\] then \( A \) is called a regular matrix (or invertible matrix). Additionally, such a matrix \( X \) is called the inverse matrix of \( A \) and is denoted as \( A^{-1} \). When \( A \) is a regular matrix, we also say that \( A \) is regular (or invertible).

Inverse proportion
 The variable \( y \) is a function of the variable \( x \) and can be expressed using a constant \( a \neq 0 \) as \[ y = \frac{a}{x} \] In this case, we say that \( y \) is inversely proportional to \( x \).

Inverse trigonometric functions
 When the domain of the function \( y = \sin x \) is restricted to \( \left[ -\pi/2, \pi/2 \right] \), its inverse function is written as \[ \begin{align} y = \sin ^{-1} x \ \ \left( -1 \leq x \leq 1 \right) \end{align}\] Similarly, when the domain of \( y = \cos x \) is restricted to \( \left[ 0, \pi \right] \), its inverse function is written as \[ \begin{align} y = \cos ^{-1} x \ \ \left( -1 \leq x \leq 1 \right) \end{align}\] Also, when the domain of \( y = \tan x \) is restricted to \( \left( -\pi/2, \pi/2 \right) \), its inverse function is written as \[ \begin{align} y = \tan ^{-1} x \ \ \left( - \infty \lt x \lt \infty \right) \end{align}\] The functions \( \sin^{-1} x \), \( \cos^{-1} x \), and \( \tan^{-1} x \) are called inverse trigonometric functions.

Irrational number
 A real number that is not rational; its decimal expansion is non-terminating and non-repeating.

J
K
L
Lattice
 Let \( Z_1, Z_2, \dots, Z_n \) be sets of integers. The Cartesian product \[ Z = Z_1 \times Z_2 \times \dots \times Z_n \] is called an \( n \)-dimensional lattice, where each element \( z = \left( z_1 , z_2, \dots , z_n \right) \) belongs to \( Z \).

Limit
[1] For a function \( y = f(x) \), if the value of \( y \) approaches \( b \) infinitely closely as \( x \) approaches \( a \) without actually reaching \( a \), then we say that \( y = f(x) \) converges to \( b \) as \( x \to a \). In this case, \( b \) is called the limit, and it is expressed as: \[ \lim _{x \to a} f(x) = b \] [2] For a sequence \( \{ a_n \} \), if for any arbitrarily small positive number \( \epsilon \), there exists a number \( M \) such that for all terms beyond the \( M \)th term, the absolute difference between \( a_n \) and \( a \) is smaller than \( \epsilon \), then the sequence \( \{ a_n \} \) is said to converge, and its limit is \( a \). This is expressed as: \[ \lim _{n \to \infty} a_n = a \]
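For example, \( \frac{x^2 - 4}{x - 2} \) is undefined at \( x = 2 \), yet its value approaches 4 as \( x \) approaches 2: \[ \lim _{x \to 2} \frac{x^2 - 4}{x - 2} = \lim _{x \to 2} \left( x + 2 \right) = 4 \]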
Linear combination
 For a sequence of column (or row) vectors with \( n \) elements \( \boldsymbol{a}_1 , \boldsymbol{a}_2 , \dots , \boldsymbol{a}_k \), a vector of the form \[ \begin{align} c_1 \boldsymbol a_1 + c_2 \boldsymbol a_2 + \cdots + c_k \boldsymbol a_k \end{align}\] where \( c_1, c_2, \dots, c_k \) are arbitrary constants, is called a linear combination of \( \boldsymbol{a}_1 , \boldsymbol{a}_2 , \dots , \boldsymbol{a}_k \). An equation of the form \[ \begin{align} c_1 \boldsymbol a_1 + c_2 \boldsymbol a_2 + \cdots + c_k \boldsymbol a_k = O \end{align}\] is called a linear relation among \( \boldsymbol{a}_1 , \boldsymbol{a}_2 , \dots , \boldsymbol{a}_k \). When \( c_1 = c_2 = \dots = c_k = 0 \), the equation always holds regardless of the choice of vectors, and this is called the trivial linear relation. If there exists a non-trivial linear relation among \( \boldsymbol{a}_1 , \boldsymbol{a}_2 , \dots , \boldsymbol{a}_k \), they are said to be linearly dependent. If no non-trivial linear relation exists, they are said to be linearly independent.

Logarithm
 The logarithm of \( b \) with base \( a \) is the exponent \( n \) that satisfies \( a^n = b \), expressed as: \[ \log _{a} b = n \] Here, \( b \) is called the antilogarithm (or argument). The base \( a \) must be a positive real number other than 1, and \( b \) must be a positive real number. The logarithm with base 10 is called the common logarithm, while the logarithm with base \( e = 2.71828 \dots \) is called the natural logarithm. The common logarithm is often denoted as \( \log b \), and the natural logarithm as \( \ln b \).
 For \( a > 0 \), \( a \neq 1 \), \( p > 0 \), \( q > 0 \), and any real number \( r \), the following logarithmic laws hold: \[ \begin{align} &[1]\ \log _{a} (pq) = \log _{a} p + \log _{a} q \\ &[2]\ \log _{a} \frac{1}{q} = - \log _{a} q \\ &[3]\ \log _{a} \frac{p}{q} = \log _{a} p - \log _{a} q \\ &[4]\ \log _{a} q^r = r \log _{a} q \\ \end{align} \]
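 For example, by laws [1] and [4] applied to the common logarithm: \[ \log 8 = \log 2^3 = 3 \log 2 , \quad \log 20 = \log \left( 2 \times 10 \right) = \log 2 + \log 10 = \log 2 + 1 \]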
Logarithmic function
 For \( a > 0 \), \( a \neq 1 \), and \( x > 0 \), the function \[ y = \log _a x \] is called the logarithmic function with base \( a \).

Logarithmic differentiation
 The method of differentiating after taking the natural logarithm of both sides of the function \( y = f(x) \). This approach utilizes the relationship: \[ \frac{d}{dx} \left( \ln y \right) = \frac{1}{y} \frac{dy}{dx} \]
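 For example, for \( y = x^x \ \left( x \gt 0 \right) \), taking the natural logarithm of both sides gives \( \ln y = x \ln x \). Differentiating both sides then yields \[ \frac{1}{y} \frac{dy}{dx} = \ln x + 1 , \quad \text{so} \quad \frac{dy}{dx} = x^x \left( \ln x + 1 \right). \]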
M
Mathematical constant
 A constant that is considered important in mathematics. Examples include the ratio of a circle's circumference to its diameter, \( \pi \), and the base of the natural logarithm, \( e \).

Mathematical formula
 A representation of symbols and characters arranged together and given quantitative meaning. It is also called an expression.

Mathematical induction
 Mathematical induction is a method for proving a propositional function \( P(n) \) that takes a natural number \( n \) as a variable. There are two forms:

First Form
 If the following two conditions are satisfied, then \( P(n) \) is true for all natural numbers:
[1] \( P(1) \) is true.
[2] For any natural number \( k \), if \( P(k) \) is assumed to be true, then \( P(k+1) \) is also true.

Second Form
If the following two conditions are satisfied, then \( P(n) \) is true for all natural numbers:
[1] \( P(1) \) is true.
[2] For any natural number \( k \), if \( P(i) \) is assumed to be true for all natural numbers \( i \) such that \( 1 \leq i \leq k \), then \( P(k+1) \) is also true.
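For example, the first form proves \( P(n) : 1 + 2 + \cdots + n = \frac{n(n+1)}{2} \) as follows. [1] \( P(1) \) is true, since \( 1 = \frac{1 \cdot 2}{2} \). [2] If \( P(k) \) is assumed to be true, then \[ 1 + 2 + \cdots + k + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2} \] so \( P(k+1) \) is also true.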

Matrix
 Let \( m \) and \( n \) be arbitrary natural numbers. \( m \times n \) complex numbers \[ \begin{align} a_{ij} \ \ \ \left( i = 1,2, \ldots ,m \ ; \ \ j = 1,2, \ldots ,n \right) \end{align}\] arranged in a rectangular format with \( m \) rows and \( n \) columns is called an \( (m, n) \)-matrix, denoted as \[ \begin{align} A = \left( a_{ij} \right) = \left( \begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{array} \right). \end{align}\]
 An \( (m,1) \)-matrix is called a column vector with \( m \) elements or a vertical vector with \( m \) elements. Similarly, a \( (1,n) \)-matrix is called a row vector with \( n \) elements or a horizontal vector with \( n \) elements.
 The \( m \times n \) complex numbers that make up a matrix are called its elements. In particular, the element \( a_{ij} \) located in the \( i \)th row from the top and the \( j \)th column from the left is called the \( \left( i, j \right) \) element. A horizontally arranged sequence of elements is called a row, while a vertically arranged sequence is called a column. Specifically, the \( i \)th row from the top is called the \( i \)th row, and the \( j \)th column from the left is called the \( j \)th column.
 The vector obtained by extracting only the \( j \)th column from an \( (m, n) \)-matrix \( A = \left( a_{ij} \right) \) is called the \( j \)th column vector of matrix \( A \). In this notation, the column vectors of \( A \) can be represented as: \[ \begin{align} \boldsymbol a_1 \ \mathrm = \left( \begin{array}{cccc} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{array} \right) , \ \boldsymbol a_2 \ \mathrm = \left( \begin{array}{cccc} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{array} \right) , \ \ldots \ , \ \boldsymbol a_n \ \mathrm = \left( \begin{array}{cccc} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{array} \right) \end{align}\] In this form, matrix \( A \) can be written as: \[ \begin{align} A = \left( \boldsymbol a_1 \ \ \boldsymbol a_2 \ \ \cdots \ \ \boldsymbol a_n \right) \end{align}\] Similarly, the vector obtained by extracting only the \( i \)th row from \( A \) is called the \( i \)th row vector of matrix \( A \). In this notation, the row vectors of \( A \) can be represented as: \[ \begin{align} & \boldsymbol a_1 \ \mathrm = \left( a_{11} , a_{12} , \cdots , a_{1n} \right) \\\\ & \boldsymbol a_2 \ \mathrm = \left( a_{21} , a_{22} , \cdots , a_{2n} \right) \\\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \vdots \\\\ & \boldsymbol a_m \ \mathrm = \left( a_{m1} , a_{m2} , \cdots , a_{mn} \right) \end{align}\] In this form, matrix \( A \) can be written as: \[ \begin{align} A = \left( \begin{array}{c} \boldsymbol a_1 \\ \boldsymbol a_2 \\ \vdots \\ \boldsymbol a_m \end{array} \right) \end{align}\]
 For two \( (m, n) \)-matrices \( A \) and \( B \), if all corresponding elements of \( A \) and \( B \) are equal, then \( A \) and \( B \) are said to be equal, which is denoted as \( A = B \). That is, given: \[ \begin{align} A = \left( a_{ij} \right) = \left( \begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{array} \right) \end{align}\] \[ \begin{align} B = \left( b_{ij} \right) = \left( \begin{array}{cccc} b_{11} & b_{12} & \ldots & b_{1n} \\ b_{21} & b_{22} & \ldots & b_{2n} \\ \vdots & \vdots & & \vdots \\ b_{m1} & b_{m2} & \ldots & b_{mn} \end{array} \right) \end{align}\] then \( A = B \) if and only if: \[ a_{ij} = b_{ij} \quad \text{for all} \quad i = 1,2, \ldots ,m \quad \text{and} \quad j = 1,2, \ldots ,n. \]
Sum of matrices and constant multiplication
 For two \( (m, n) \)-matrices \( A \) and \( B \), the matrix formed by the sum of corresponding elements is called the sum of \( A \) and \( B \), denoted by \( A + B \). That is, \[ \begin{align} A = \left( \begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{array} \right) \end{align}\] \[ \begin{align} B = \left( \begin{array}{cccc} b_{11} & b_{12} & \ldots & b_{1n} \\ b_{21} & b_{22} & \ldots & b_{2n} \\ \vdots & \vdots & & \vdots \\ b_{m1} & b_{m2} & \ldots & b_{mn} \end{array} \right) \end{align}\] Then, \[ \begin{align} A + B = \left( \begin{array}{cccc} a_{11} + b_{11} & a_{12} + b_{12} & \ldots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & a_{22} + b_{22} & \ldots & a_{2n} + b_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \ldots & a_{mn} + b_{mn} \end{array} \right) \end{align}\]  Additionally, for a complex number \( c \), the matrix obtained by multiplying each element of an \( (m, n) \)-matrix \( A \) by \( c \) is called the \( c \)-multiple of \( A \), denoted by \( cA \). That is, \[ \begin{align} cA = \left( \begin{array}{cccc} ca_{11} & ca_{12} & \ldots & ca_{1n} \\ ca_{21} & ca_{22} & \ldots & ca_{2n} \\ \vdots & \vdots & & \vdots \\ ca_{m1} & ca_{m2} & \ldots & ca_{mn} \end{array} \right) \end{align}\] In particular, \( (-1)A \) is represented as \( -A \). Furthermore, \( A + (-B) \) is represented as \( A - B \).
 The following properties hold for the sum of matrices and constant multiplication: \[ \begin{align} \left( A + B \right) + C &= A + \left( B + C \right) \\\\ A + B &= B + A \\\\ c \left( A + B \right) &= cA + cB \\\\ \left( c + d \right) A &= cA + dA \\\\ \left( cd \right) A &= c \left( dA \right) \\\\ \end{align}\]
Matrix product
 Let \( A \) be an \( (l, m) \)-matrix and \( B \) be an \( (m, n) \)-matrix. Then, the matrices \( A \) and \( B \) are expressed as: \[ \begin{align} A = \left( \begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1m} \\ a_{21} & a_{22} & \ldots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{l1} & a_{l2} & \ldots & a_{lm} \end{array} \right) \end{align}\] \[ \begin{align} B = \left( \begin{array}{cccc} b_{11} & b_{12} & \ldots & b_{1n} \\ b_{21} & b_{22} & \ldots & b_{2n} \\ \vdots & \vdots & & \vdots \\ b_{m1} & b_{m2} & \ldots & b_{mn} \end{array} \right) \end{align}\] The product \( AB \) is defined as: \[ \begin{align} AB = \left( \begin{array}{cccc} \sum _{j = 1} ^m a_{1j} b_{j1} & \sum _{j = 1} ^m a_{1j} b_{j2} & \ldots & \sum _{j = 1} ^m a_{1j} b_{jn} \\ \sum _{j = 1} ^m a_{2j} b_{j1} & \sum _{j = 1} ^m a_{2j} b_{j2} & \ldots & \sum _{j = 1} ^m a_{2j} b_{jn} \\ \vdots & \vdots & & \vdots \\ \sum _{j = 1} ^m a_{lj} b_{j1} & \sum _{j = 1} ^m a_{lj} b_{j2} & \ldots & \sum _{j = 1} ^m a_{lj} b_{jn} \end{array} \right) \end{align}\] The following properties hold for matrix multiplication: \[ \begin{align} \left( A B \right) C &= A \left( B C \right) \\\\ A \left( B + C \right) &= AB + AC \\\\ \left( A + B \right) C &= AC + BC \\\\ c \left( AB \right) &= \left( cA \right) B = A \left( cB \right) \end{align}\]
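 A minimal sketch of this definition in plain Python (the function name and example matrices are our own illustrative choices):

```python
# Product of an (l, m)-matrix A and an (m, n)-matrix B:
# the (i, k) element of AB is the sum over j of a_ij * b_jk, as defined above.
def mat_mul(A, B):
    l, m, n = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must agree"
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(n)]
            for i in range(l)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```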
Mean value theorem
[1] If a function \( y = f(x) \) is continuous on \( [a,b] \) and differentiable on \( (a,b) \), then there exists a point \( c \) with \( a \lt c \lt b \) such that \[ \begin{align} \frac{f(b)-f(a)}{b-a} = f'(c) \end{align}\] This is called the Mean Value Theorem.

[2] Let \( f \left( x,y \right) \) be a \( C^1 \)-function of two variables with domain \( D \). Suppose there exists \( \delta \gt 0 \) such that the interior of the circle centered at \( \alpha \left( a,b \right) \) with radius \( \delta \), \[ \begin{align} U \left( \alpha , \delta \right) = \left\{ \left( x,y \right) \ | \ \sqrt{ \left( x - a \right)^2 + \left( y - b \right)^2 } \lt \delta \right\} \end{align}\] is contained in \( D \), i.e. \( U \left( \alpha , \delta \right) \subset D \). Then, for each point \( \left( s,t \right) \) such that \( \left( a + s, b + t \right) \in U \left( \alpha , \delta \right) \), there exists \( \theta \) with \( 0 \lt \theta \lt 1 \) such that the following holds: \[ \begin{align} f \left( a + s, b + t \right) = f \left( a, b \right) + s \frac{\partial f}{\partial x} \left( a + \theta s , b + \theta t \right) + t \frac{\partial f}{\partial y} \left( a + \theta s , b + \theta t \right) \end{align}\]
Multinomial theorem
 Let \( n \) be a natural number. The following equation is called the Multinomial Theorem: \[ \left( a_1 + a_2 + \cdots + a_m \right) ^n = \sum _{p_i} \frac{n!}{n_1!n_2! \cdots n_m!} a_1 ^{n_1} a_2 ^{n_2} \cdots a_m ^{n_m} \] where \( p_i \) represents an ordered set of integers \( (n_1, n_2, \dots, n_m) \) satisfying \( n_1 \geq 0 \), \( n_2 \geq 0 \), …, \( n_m \geq 0 \), and \( \sum _{k=1} ^m n_k = n \). The summation on the right-hand side extends over all such sets \( p_i \). The coefficients of each term in the multinomial theorem are called multinomial coefficients.

Multiple-valued function
 A relation in which a single value of the independent variable may correspond to more than one value of the dependent variable.

Multiplication
 The following formulas are fundamental for multiplication involving parentheses: \[ (a+b)^2 = a^2 + 2ab + b^2\] \[ (a-b)^2 = a^2 - 2ab + b^2\] \[ (a+b)(a-b) = a^2 - b^2 \] \[ (x+a)(x+b) = x^2 + (a+b)x + ab \] \[ (ax + b)(cx + d) = acx^2 + (ad + bc)x + bd \]
N
Natural numbers
 An integer greater than or equal to \(1\).
Example: \(1, 2, 3, \dots\)

Negative numbers
 A number smaller than \(0\).

Neighborhood
 In the complex plane, the interior of the circle with center \( \alpha \) and radius \( r \) is defined as \[ \begin{align} U \left( \alpha , r \right) = \left\{ z | \left| z - \alpha \right| \lt r \right\} \end{align}\] and is called an \( r \)-neighborhood of the point \( \alpha \). In particular, when the radius is not explicitly specified, it is simply called a neighborhood of \( \alpha \), and denoted by \( U(\alpha) \).

Newton's method of approximation
 A method for approximately finding \( x \) such that \( f(x) = 0 \) for a differentiable function of one variable \( f(x) \) over the real numbers. The method consists of the following steps:

[1] Choose an appropriate initial value \( x_0 \) such that \( f(x_0) \neq 0 \). (Note that the iteration below also requires \( f' \left( x_{k-1} \right) \neq 0 \) at every step.)

[2] Compute \( x_k \) iteratively using the following formula: \[ x_{k} = x_{k-1} - \frac{f \left( x_{k-1} \right) }{f' \left( x_{k-1} \right) } \] for \( k = 1,2,3, \dots \).

[3] Depending on the choice of the initial value, the sequence may satisfy: \[ \lim _{k \to \infty} f \left( x_k \right) = 0. \] However, in some cases, this condition may not hold.
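A minimal sketch of steps [1] to [3] (plain Python; the tolerance, iteration cap, and test function are our own choices):

```python
# Newton's method for f(x) = 0.
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:       # |f(x_k)| is already close to 0
            return x
        x = x - fx / fprime(x)  # the iteration formula in step [2]
    return x                    # may not have converged (see step [3])

# Example: the positive root of x^2 - 2 = 0, i.e. the square root of 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # 1.41421356...
```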

Norm
 Let \( \boldsymbol{x} \) be a column vector with \( n \) elements. The non-negative square root of the inner product \( \left( \boldsymbol{x} , \boldsymbol{x} \right) \) is called the length or norm of \( \boldsymbol{x} \), denoted by \( \| \boldsymbol{x} \| \): \[ \begin{align} \| \boldsymbol{x} \| &= \sqrt{\left( \boldsymbol x , \boldsymbol x \right)} \\\\ &= \sqrt{\left| x_1 \right| ^2 + \left| x_2 \right| ^2 + \cdots + \left| x_n \right| ^2} \end{align}\] The norm satisfies the following inequalities: \[ \begin{align} & \left| \left( \boldsymbol{x} , \boldsymbol{y} \right) \right| \leq \| \boldsymbol{x} \| \| \boldsymbol{y} \| \quad \text{(Cauchy–Schwarz inequality)} \\\\ & \| \boldsymbol{x} + \boldsymbol{y} \| \leq \| \boldsymbol{x} \| + \| \boldsymbol{y} \| \quad \text{(Triangle inequality)} \end{align} \]
Normal matrix
 A square matrix \( A \) is called a normal matrix if it satisfies \( A^{\dagger} A = A A^{\dagger} \).

Numerical integration
 A method for obtaining approximate values of the solution to a differential equation using an appropriate approximation technique. The approximate values obtained through numerical integration are called numerical solutions. In numerical integration, it is necessary to divide the domain of the solution into appropriate intervals, and this interval width is called the step size.

O
Open set
 A subset \( D \) of the complex plane is called an open set if, for every point \( \alpha \in D \), there exists some \( r \gt 0 \) such that the \( r \)-neighborhood of \( \alpha \) satisfies \[ \begin{align} U \left( \alpha , r \right) \subset D \end{align}\]
Ordinary differential equation
 An equation that includes the derivative of an unknown function of one variable. To find a particular solution, both sides of the equation are integrated indefinitely, and an initial condition (a known point \( (x,y) = (x_0 , y_0) \) on the solution) is used to determine the integration constant.
Example: For the ordinary differential equation \[ \frac{dy}{dx} = k \quad (k \text{ is a constant}) \] integrating both sides gives \[ y = kx + C \quad (C \text{ is the integration constant}). \] If the initial condition \( (x,y) = (0,0) \) is given, then \( C = 0 \), so the solution is \[ y = kx. \]
Orthonormal set
 When the column vectors with \( n \) elements \( \boldsymbol{e}_1, \boldsymbol{e}_2, \dots, \boldsymbol{e}_k \) are mutually orthogonal and each vector has a length of 1, they are said to form an orthonormal set.

P
Parabola
 A type of conic section. When the axis is parallel to the \( y \) -axis, it is represented by the following equation: \[ y - q = a (x - p)^2 \quad (a \neq 0, \ p \text{ and } q \text{ are arbitrary real numbers})\] When \( a \gt 0 \), the graph is concave upwards, and when \( a \lt 0 \), the graph is concave downwards. The vertex of the parabola is at \( (p, q) \), and the axis of symmetry is the line \( x = p \).
 When the axis is parallel to the \( x \) -axis, it is represented by the following equation: \[ x - q = a (y - p)^2 \quad (a \neq 0, \ p \text{ and } q \text{ are arbitrary real numbers})\] When \( a \gt 0 \), the graph is concave to the right, and when \( a \lt 0 \), the graph is concave to the left. The vertex of the parabola is at \( (q, p) \), and the axis of symmetry is the line \( y = p \).

Partial derivative
 Let \( \mathbb{R} \) denote the set of all real numbers. For a natural number \( n \), the \( n \)-fold Cartesian product of \( \mathbb{R} \) is denoted by \( \mathbb{R}^n \). For an \( n \)-variable function \( f(x_1, x_2, \ldots, x_n) \) with variables in \( \mathbb{R}^n \), if the following limit exists: \[ \lim _{h \to 0} \frac{f \left( p_1, p_2, \ldots , p_k + h , \ldots , p_n \right) - f \left( p_1, p_2, \ldots ,p_k, \ldots , p_n \right) }{h}\] then \( f(x_1, x_2, \ldots, x_n) \) is said to be partially differentiable with respect to \( x_k \) at the point \( (p_1, p_2, \ldots, p_n) \). The value of this limit is called the partial derivative of \( f \) with respect to \( x_k \) at the point \( (p_1, p_2, \ldots, p_n) \) and is denoted by: \[ \frac{\partial f}{\partial x_k} \left( p_1, p_2, \ldots , p_n \right) \] When \( f(x_1, x_2, \ldots, x_n) \) is partially differentiable with respect to all \( x_k \) \((k = 1, 2, \ldots, n)\) at the point \( (p_1, p_2, \ldots, p_n) \), the function \( f \) is simply said to be partially differentiable.

Partial derivative function
 Let \( \mathbb{R} \) denote the set of all real numbers. For a natural number \( n \), the \( n \)-fold Cartesian product of \( \mathbb{R} \) is denoted by \( \mathbb{R}^n \). For an \( n \)-variable function \( f(x_1, x_2, \ldots, x_n) \) with variables in \( \mathbb{R}^n \), if the function is partially differentiable with respect to \( x_k \) at every element of a certain set \( D \subset \mathbb{R}^n \), then the function that assigns to each element \( (x_1, x_2, \ldots, x_n) \) its partial derivative with respect to \( x_k \) is called the partial derivative function of \( f(x_1, x_2, \ldots, x_n) \) with respect to \( x_k \). This is denoted by \[ \frac{\partial f}{\partial x_k}.\]

Higher-order partial derivatives
 Let \( i \) and \( j \) be any natural numbers not exceeding \( n \). For the partial derivative function of an \( n \)-variable function \( f(x_1, x_2, \ldots, x_n) \) with respect to \( x_i \), the partial derivative of this function with respect to \( x_j \) is represented as follows: \[ \frac{\partial}{\partial x_j} \left( \frac{\partial f}{\partial x_i} \right) = \frac{\partial^2 f}{\partial x_j \partial x_i}.\] In the case where \( i = j \), the following notation can also be used: \[ \frac{\partial}{\partial x_i} \left( \frac{\partial f}{\partial x_i} \right) = \frac{\partial^2 f}{\partial x_i^2}.\] These are called second-order partial derivative functions.
 Let \( i \), \( j \), and \( k \) be any natural numbers not exceeding \( n \). For the partial derivative of an \( n \)-variable function \( f(x_1, x_2, \ldots, x_n) \) with respect to \( x_i \), and then with respect to \( x_j \), the partial derivative of this function with respect to \( x_k \) is represented by the following formula: \[ \frac{\partial}{\partial x_k} \left( \frac{\partial^2 f}{\partial x_j \partial x_i} \right) = \frac{\partial^3 f}{\partial x_k \partial x_j \partial x_i}.\] When \( i = j \), the following notation may also be used: \[ \frac{\partial}{\partial x_k} \left( \frac{\partial^2 f}{\partial x_i \partial x_i} \right) = \frac{\partial^3 f}{\partial x_k \partial x_i^2}.\] When \( j = k \), the following notation may also be used: \[ \frac{\partial}{\partial x_j} \left( \frac{\partial^2 f}{\partial x_j \partial x_i} \right) = \frac{\partial^3 f}{\partial x_j^2 \partial x_i}.\] When \( i = j = k \), the following notation may also be used: \[ \frac{\partial}{\partial x_i} \left( \frac{\partial^2 f}{\partial x_i \partial x_i} \right) = \frac{\partial^3 f}{\partial x_i^3}.\] These are called third-order partial derivative functions. The same applies to partial derivative functions of the fourth order and higher.

 Let \( m \) be a natural number. For a function of \( n \) variables \( f(x_1, x_2, \ldots, x_n) \), where the variables are elements of \( \mathbb{R}^n \), if all partial derivatives of order \( m \) are continuous, the function \( f(x_1, x_2, \ldots, x_n) \) is called a \( \boldsymbol C^m \)-function. Furthermore, if for every natural number \( l \), all partial derivatives of order \( l \) are continuous, then the function \( f(x_1, x_2, \ldots, x_n) \) is called a \( \boldsymbol C^{\infty} \)-function.

Partial differential equation
 An equation that includes partial derivatives of an unknown function is called a partial differential equation (PDE). Problems that involve solving a PDE together with specified conditions, such as the value of the unknown function at time \( t = 0 \) (called the initial condition) or conditions on the unknown function at the boundary of the spatial region (called boundary conditions), are referred to as initial value problems and boundary value problems, respectively.

Partial fraction decomposition
 Expressing a single fraction as the sum of two or more fractions. For real numbers \( a \), \( b \), \( c \), \( d \), and \( x \) for which all denominators below are nonzero (in particular, \( ad \neq bc \) in [2]), the following hold: \[ \begin{align} &[1] \ \ \ \ \frac{1}{ab} = \frac{1}{a+b} \left( \frac{1}{a} + \frac{1}{b} \right) = \frac{1}{a \left( a + b \right) } + \frac{1}{b \left( a + b \right) } \\\\ &[2] \ \ \ \ \frac{1}{\left( ax + b \right) \left( cx + d \right) } = \frac{1}{ad - bc} \left( \frac{a}{ax + b} - \frac{c}{cx + d} \right) \end{align}\]
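For example, applying [2] with \( a = c = 1 \), \( b = 1 \), \( d = 2 \) (so \( ad - bc = 1 \)): \[ \frac{1}{\left( x + 1 \right) \left( x + 2 \right)} = \frac{1}{x+1} - \frac{1}{x+2} \]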
Periodic function
 A function of one variable \( f(x) \) is called a periodic function with period \( a \) (where \( a \) is a positive constant) if it satisfies the equation \[ f(x+a) = f(x) \] for all \( x \).

Permutation
 Arranging \( r \) objects in a row, selected from \( n \) distinct objects, is called a permutation of \( r \) objects from \( n \), and the total number of such arrangements is denoted by \( _n \rm P _ \it r \). The total number of permutations is given by the following formula: \[ _n \rm P _ \it r \rm = \it n \rm( \it n \rm -1)( \it n \rm -2) \cdots ( \it n \rm - \it r \rm + 1) = \frac{\it n \rm !}{\rm \left( \it n \rm - \it r \rm \right) !}\]
Permutations with repetitions
 If there are \( n \) objects consisting of \( m \) different types of characters, where the number of occurrences of character \( L_i \) is \( n_i \), then the total number of ways to arrange them in a row is given by: \[ \frac{n!}{n_1 ! n_2 ! \cdots n_m !} \] where \[ n = \sum _{i=1} ^m n_i \]
Permutation (algebra)
 The operation of rearranging the numbers \( 1,2, \dots , n \) is called a permutation. There are \( n! \) different permutations of \( n \) elements.

 For a permutation \( \sigma \) of \( n \) elements, if the \( i \)-th element is moved to the \( j \)-th position, this is expressed as \[ \begin{align} \sigma \left( i \right) = j. \end{align}\] If \[ \begin{align} \sigma \left( 1 \right) = i_1 , \ \ \sigma \left( 2 \right) = i_2 , \ \ldots \ , \ \ \sigma \left( n \right) = i_n , \end{align}\] then \( \sigma \) is written in two-line notation as \[ \begin{align} \sigma = \left( \begin{array}{cccc} 1 & 2 & \ldots & n \\ i_1 & i_2 & \ldots & i_n \\ \end{array} \right). \end{align}\]
 The identity permutation, which leaves all elements unchanged, is \[ \begin{align} \sigma = \left( \begin{array}{cccc} 1 & 2 & \ldots & n \\ 1 & 2 & \ldots & n \\ \end{array} \right), \end{align}\] and is denoted by \( 1_n \).

 The inverse permutation of \( \sigma \), denoted \( \sigma^{-1} \), reverses the effect of \( \sigma \). That is, if \[ \begin{align} \sigma = \left( \begin{array}{cccc} 1 & 2 & \ldots & n \\ i_1 & i_2 & \ldots & i_n \\ \end{array} \right), \end{align}\] then its inverse is \[ \begin{align} \sigma ^{-1} = \left( \begin{array}{cccc} i_1 & i_2 & \ldots & i_n \\ 1 & 2 & \ldots & n \\ \end{array} \right). \end{align}\]
 The operation of applying two permutations \( \sigma \) and \( \tau \) in succession is called the product of \( \sigma \) and \( \tau \), denoted \( \tau \sigma \). If \[ \begin{align} \sigma = \left( \begin{array}{cccc} 1 & 2 & \ldots & n \\ i_1 & i_2 & \ldots & i_n \\ \end{array} \right) , \ \ \ \tau = \left( \begin{array}{cccc} i_1 & i_2 & \ldots & i_n \\ j_1 & j_2 & \ldots & j_n \\ \end{array} \right), \end{align}\] then their product is \[ \begin{align} \tau \sigma = \left( \begin{array}{cccc} 1 & 2 & \ldots & n \\ j_1 & j_2 & \ldots & j_n \\ \end{array} \right). \end{align}\]
 A permutation of \( n \) elements that swaps only two elements is called a transposition. A permutation \( \sigma \) is called an even permutation if it can be expressed as the product of an even number of transpositions, and an odd permutation if it can be expressed as the product of an odd number of transpositions. The sign of a permutation \( \sigma \), denoted \( \operatorname{sgn} \sigma \), is defined as: \[ \operatorname{sgn} \sigma = 1 \quad \text{if \( \sigma \) is even,} \] \[ \operatorname{sgn} \sigma = -1 \quad \text{if \( \sigma \) is odd.} \]
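 A minimal sketch (our own helper, not part of this glossary) that computes \( \operatorname{sgn} \sigma \) by counting how many transpositions it takes to sort the bottom line \( (i_1, i_2, \dots, i_n) \) of the two-line notation back to the identity:

```python
# Sign of a permutation given as the bottom line (i_1, ..., i_n) of its
# two-line notation over 1, 2, ..., n. Every swap below is one transposition,
# so the parity of the swap count gives the sign.
def sign(perm):
    p = [i - 1 for i in perm]        # 0-based working copy
    sgn = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]  # apply one transposition
            sgn = -sgn
    return sgn

print(sign([1, 2, 3]))  # +1: the identity permutation is even
print(sign([2, 1, 3]))  # -1: a single transposition is odd
print(sign([2, 3, 1]))  # +1: a 3-cycle is a product of two transpositions
```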
Perpendicular
 Let there be a line \( l \) and a point \( A \) that does not lie on \( l \). Draw a segment from \( A \) to a point \( H \) on \( l \). If the angle at \( H \) is a right angle, then the segment \( AH \) is called the perpendicular dropped from \( A \) to \( l \).

Phase plane
 Consider two differential equations for \( x = x(t) \) and \( y = y(t) \): \[ \begin{align} \frac{dx}{dt} &= F(x,y) \\\\ \frac{dy}{dt} &= G(x,y) \end{align}\] The solution curves plotted in the \( xy \)-plane as \( t \) varies are called trajectories, and this \( xy \)-plane is referred to as the phase plane. Additionally, a point \( (x_0, y_0) \) satisfying \[ F(x_0 , y_0) = G(x_0 , y_0) = 0 \] is called an equilibrium point.

Pi
 The ratio of a circle's circumference to its diameter, denoted \( \pi \). It is an irrational number with a value of \( 3.141592 \ldots \).

Piecewise-continuous
 A function of one variable \( f (x) \) is said to be piecewise-continuous on a finite real interval if it has only a finite number of discontinuities within that interval and the one-sided limits exist and are finite at each of them. At a point of discontinuity \( x \), the right-hand and left-hand limits are expressed as follows: \[ \begin{align} f \left( x + 0 \right) &= \lim _{\epsilon \to 0} f \left( x + \epsilon \right) \ \ \ \ \left( \epsilon \gt 0 \right) \\\\ f \left( x - 0 \right) &= \lim _{\epsilon \to 0} f \left( x - \epsilon \right) \ \ \ \ \left( \epsilon \gt 0 \right) \end{align}\]
Point
[1] In a mathematical expression containing variables, one possible value that a variable can take.

[2] The coordinates in the Cartesian-coordinate plane.

[3] A point as a geometric figure on a plane.

Polynomial
[1] An expression consisting of the sum and product of multiple terms.

[2] The set of all complex numbers is denoted by \( \mathbb{C} \). A mathematical expression formed from a single variable \( x \) and coefficients \( a_0, a_1, \dots, a_n \) from \( \mathbb{C} \) is called a polynomial with complex coefficients in the variable \( x \): \[ \sum _{i=0} ^n a_i x^i = a_0 + a_1 x + \cdots + a_n x^n \]
Positive numbers
 A number greater than \( 0 \).

Power
 An expression formed by multiplying the same variable, constant, or number repeatedly. The number of multiplications is written as a superscript to the right of the base. For example: \[ a \times a \times a = a^3\] The variable, constant, or number being multiplied is called the base, and the number of times it is multiplied is called the exponent. The process of applying a power to a base, resulting in repeated multiplication of the base, is called exponentiation.

 When \( a \) is a positive real number, \( n \) is a natural number, and \( m \) is an integer, the powers of \( a \) are defined as follows:

[1] \( a^0 = 1 \)
[2] \( a^n \) is equal to multiplying \( a \) by itself \( n \) times.
[3] \( a^{-n} \) is equal to multiplying \( \frac{1}{a} \) by itself \( n \) times.
[4] \( a^{\frac{m}{n}} \) is the \( n \)th root of \( a^m \), i.e., the number that, when multiplied by itself \( n \) times, gives \( a^m \). This is expressed as: \[ a^{\frac{m}{n}} = \sqrt[n]{a^m}\] Particularly, when \( n = 2 \), it may be simplified to: \[ a^{\frac{m}{2}} = \sqrt{a^m}\] [5] When \( p \) is an irrational number and \( \{ p_l \} \) is a sequence of rational numbers with: \[ \lim _{l \to \infty} p_l = p\] then the power of \( a \) raised to the irrational exponent \( p \) is defined as: \[ a^p = \lim _{l \to \infty} a^{p_l}\] In calculations involving exponents, the following laws of exponents hold true for positive real numbers \( a \) and \( b \), and real numbers \( p \) and \( q \): \[ \begin{align} &[1]\ a^p a^q = a^{p+q} \\ &[2]\ \left( a^p \right) ^q = a^{pq} \\ &[3]\ (ab)^p = a^p b^p \\ &[4]\ \frac{a^p}{a^q} = a^{p-q} \\ &[5]\ \left( \frac{1}{a^p} \right) ^q = \frac{1}{a^{pq}} \\ &[6]\ \left( \frac{a}{b} \right) ^p = \frac{a^p}{b^p} \end{align} \]
Probability
 \( \Omega \) is a set consisting of elements called elementary events, and this set is referred to as the sample space. The set \( \mathfrak{F} \), known as the event space, is a collection of subsets of \( \Omega \), and its elements are called events. A triplet \( \left( \Omega, \mathfrak{F}, P \right) \) that satisfies the following axioms of probability [A1]–[A6] is called a probability space.

Axioms of Probability:
[A1] The union, difference, and intersection of any two elements in \( \mathfrak{F} \) are also contained in \( \mathfrak{F} \).
[A2] \( \Omega \in \mathfrak{F} \).
[A3] For any element \( A \in \mathfrak{F} \), a non-negative real number \( P(A) \), called the probability of the event \( A \), is assigned.
[A4] \( P(\Omega) = 1 \).
[A5] If two elements \( A \) and \( B \) in \( \mathfrak{F} \) are disjoint, then \[ P \left( A \cup B \right) = P \left( A \right) + P \left( B \right) \] In this case, \( A \) and \( B \) are said to be mutually exclusive.
[A6] For any decreasing sequence of elements in \( \mathfrak{F} \), \[ A_1 \supset A_2 \supset \cdots \supset A_n \supset \cdots \] if \[ \bigcap _{i=1} ^{\infty} A_i = \varnothing \] then \[ \lim _{i \to \infty} P \left( A_i \right) = 0 \]
 From the axioms of probability, the following six propositions hold:
[1] If \( A \subset B \), then \( P(A) \leq P(B) \).
[2] \( P \left( \varnothing \right) = 0 \)
[3] \( P \left( A^c \right) = P \left( \Omega - A \right) = 1 - P \left( A \right) \)
Here, \( A^c \) is called the complementary event.
[4] \( 0 \leq P \left( A \right) \leq 1 \)
[5] \( P \left( A \cup B \right) = P \left( A \right) + P \left( B \right) - P \left( A \cap B \right) \)
[6] If a sequence of sets \( A_1, A_2, \dots, A_n, \dots \) in \( \mathfrak{F} \) are mutually exclusive, then \[ P \left( \bigcup _{i=1} ^{\infty} A_i \right) = \sum _{i=1} ^{\infty} P \left( A_i \right).\]
Product of probability spaces

Case 1: Finite sample spaces
 Consider two probability spaces with finite sample spaces: \[ \left( \Omega_1 , \mathcal{P}(\Omega_1), P_1 \right), \quad \left( \Omega_2 , \mathcal{P}(\Omega_2), P_2 \right). \] The product probability space \( (\Omega, \mathfrak{F}, P) \) is defined as follows:
Sample space: \[ \Omega = \Omega_1 \times \Omega_2. \] Event space: \[ \mathfrak{F} = \mathcal{P}(\Omega). \] Probability measure: Defined by two conditions:
1. For any \( e \in \Omega_1 \), \( g \in \Omega_2 \), the probability of \( \{(e, g)\} \) is given by: \[ P\left(\{(e, g)\}\right) = P_1(\{e\}) \cdot P_2(\{g\}). \] 2. If \( A, B \in \mathfrak{F} \) are disjoint, then: \[ P(A \cup B) = P(A) + P(B). \] The probability space \( (\Omega, \mathfrak{F}, P) \) defined in this manner is a valid probability space.

Case 2: Infinite sample spaces
 For two probability spaces with infinite sample spaces: \[ \left( \Omega_1 , \mathcal{P}(\Omega_1), P_1 \right), \quad \left( \Omega_2 , \mathcal{P}(\Omega_2), P_2 \right),\] where the sample spaces are given by: \[ \Omega_1 = [a, b], \quad \Omega_2 = [c, d], \quad -\infty \leq a \leq b \leq \infty, \quad -\infty \leq c \leq d \leq \infty.\] For any interval \( [\alpha, \beta] \subseteq [a, b] \), the probability measure is given by: \[ P_1([\alpha, \beta]) = \int_{\alpha}^{\beta} f(x)dx,\] where \( f(x) \) satisfies: \[ f(x) \geq 0 \quad \text{for } x \in [a, b], \quad \int_a^b f(x)dx = 1.\] Similarly, for \( [\gamma, \delta] \subseteq [c, d] \), \[ P_2([\gamma, \delta]) = \int_{\gamma}^{\delta} g(y)dy,\] where \( g(y) \) satisfies: \[ g(y) \geq 0 \quad \text{for } y \in [c, d], \quad \int_c^d g(y)dy = 1.\] The product probability space \( (\Omega, \mathfrak{F}, P) \) is defined as:
Sample space: \[ \Omega = \left\{ ([\alpha, \beta], [\gamma, \delta]) \mid a \leq \alpha \leq \beta \leq b, c \leq \gamma \leq \delta \leq d \right\}.\] Event space: \[ \mathfrak{F} = \mathcal{P}(\Omega).\] Probability measure: Defined by two conditions:
1. For \( e = [\alpha, \beta] \), \( g = [\gamma, \delta] \), \[ \begin{align} P \left( \left( e, \ g \right) \right) &= P_1 \left( e \right) \cdot P_2 \left( g \right) \\\\ &= \left( \int _{\alpha} ^{\beta} f \left( x \right) dx \right) \cdot \left( \int _{\gamma} ^{\delta} g \left( y \right) dy \right) \\\\ &= \int _{\alpha} ^{\beta} \left( \int _{\gamma} ^{\delta} f \left( x \right) g \left( y \right) dy \right) dx \end{align}\] 2. If \( A, B \in \mathfrak{F} \) are disjoint, then: \[ P(A \cup B) = P(A) + P(B).\] Thus, the product probability space \( (\Omega, \mathfrak{F}, P) \) is a valid probability space.

Probability distribution
 In a probability space \( \left( \Omega , \mathfrak{F} , P \right) \), a variable \( X \) that takes on a unique value for each elementary event is called a random variable. The relationship between the values that the random variable \( X \) can take and the probability associated with each corresponding elementary event is called a probability distribution. The probability that \( X \) takes a specific value \( a \) is expressed as \( P(X = a) \), and the probability that \( X \) falls within the real interval \([a, b]\) is written as \( P(a \leq X \leq b) \).

Case 1: Finite sample space
 If the sample space \( \Omega \) consists of a finite number of elements, say \( e_1, e_2, \dots, e_n \), and each of these elementary events corresponds to a specific random variable value \( x_1, x_2, \dots, x_n \), the following conditions hold: \[ P \left( X = x_i \right) = P \left( e_i \right) \geq 0 \ \ \ \left( i=1,\ 2,\cdots ,\ n \right) \] \[ \sum _{i=1} ^n P \left( X = x_i \right) = \sum _{i=1} ^n P \left( e_i \right) = 1\]
Case 2: Infinite sample space
 When the sample space \( \Omega \) is infinite, the elementary event \( e \in \Omega \) takes real values satisfying \( -\infty \leq a \leq e \leq b \leq \infty \). In such cases, the random variable \( X \) is defined as \( X = e \), and we use a probability density function (PDF) \( f(x) \) to describe the probability distribution. The function \( f(x) \) must satisfy the following conditions: \[ f(x) \geq 0 \quad \text{for all } x \in [a, b] \] \[ \int_{a}^{b} f(x) \, dx = 1 \] Using the probability density function, the probability distribution is expressed as: \[ P(X = c) = \int_{c}^{c} f(x) \, dx = 0 \quad \text{for any } c \in [a, b]\] \[ P(\alpha \leq X \leq \beta) = \int_{\alpha}^{\beta} f(x) \, dx \quad \text{where } a \leq \alpha \leq \beta \leq b\]
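 For example (an illustration of Case 2), the uniform distribution on \( [0, 1] \) has the probability density function \( f(x) = 1 \), which satisfies both conditions above, and \[ P(\alpha \leq X \leq \beta) = \int_{\alpha}^{\beta} 1 \, dx = \beta - \alpha \ \ \ \left( 0 \leq \alpha \leq \beta \leq 1 \right) \]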
Product
 The result of multiplication. In the product of variables, symbols, or numbers, the multiplication sign \( \times \) is often omitted. Additionally, the symbol \( \cdot \) is sometimes used instead of \( \times \) for multiplication.
Examples: \( 2 \times a = 2a \), \( x \times y = xy \), \( 2 \cdot 3 = 2 \times 3 = 6 \).

Product of the sequence
 Let \( n \lt m \). The product of the sequence \( \{ a_k \} \) from \( a_n \) to \( a_m \) is represented as \[ \prod _{k=n} ^m a_k \] That is, \[ \prod _{k=n} ^m a_k = a_n \times a_{n+1} \times \cdots \times a_{m-1} \times a_m .\]
Proportion
 When a variable \( y \) is a function of a variable \( x \) and can be expressed using a certain constant \( a \neq 0 \) as \( y = ax \), \( y \) is said to be proportional to \( x \). The constant \( a \) is called the proportionality constant or the rate of change.

Proposition
 A statement or equation whose truth value is determined. Important propositions in discussions are also called theorems.

Propositional function
 A proposition \( P(x) \) whose truth value depends on the variable \( x \) is called a propositional function.

Q
Quotient
 The result of division.

R
Radian
 In a circle with a radius of 1, the central angle corresponding to an arc of length \( \theta \) measures \( \theta \) radians. The unit "radian" is often omitted when writing angles. The method of expressing angles in radians is called radian measure. The relationship between radians and degrees \( (^{\circ}) \) is given by the following equation: \[ \pi \text{ radians} = 180^\circ\]  In radian measure, angles are extended to the entire set of real numbers as follows:
 First, consider a circle with radius 1 centered at the origin \( O \) in the Cartesian plane. This circle is called the unit circle. Let point \( A(1,0) \) be the starting point, and suppose that point \( P \) moves along the circumference of the unit circle. The angle \( \theta \) is determined as follows based on the movement of point \( P \):

[1] If point \( P \) moves counterclockwise along the unit circle, define \( \theta = \angle POA \ \ \left( 0 \leq \angle POA \lt 2 \pi \right) \). If point \( P \) has additionally completed \( n \) full counterclockwise revolutions, define \( \theta = \angle POA + 2n \pi \), where \( n \) is a natural number.

[2] If point \( P \) moves clockwise along the unit circle, define \( \theta = - \angle POA \ \ \left( 0 \leq \angle POA \lt 2 \pi \right) \). If point \( P \) has additionally completed \( n \) full clockwise revolutions, define \( \theta = - \angle POA - 2n \pi \), where \( n \) is a natural number.
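As a quick application of the relationship \( \pi \text{ radians} = 180^\circ \): since \( 1^\circ = \frac{\pi}{180} \) radians, we have \( 90^\circ = \frac{\pi}{2} \) radians and \( 60^\circ = \frac{\pi}{3} \) radians.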

Radical symbol
 The square root symbol \( \sqrt{} \).

Random numbers
 A number chosen randomly is called a random number.

Rank
 Any \( (m, n) \) matrix \( A \) can be transformed into the following standard form \( F_{m, n}(r) \) by applying a series of elementary row and column operations: \[ \begin{align} F_{m, \ n} \left( r \right) &= \left( \begin{array}{cc} E_{r} & O_{r, \ n-r} \\ O_{m-r, \ r} & O_{m-r, \ n-r} \end{array} \right) \end{align}\] Here, \( r \) is a number determined solely by the matrix \( A \), and it is called the rank of the matrix \( A \).

Ratio
 When there are two numbers \( a \) and \( b \), the expression \( a : b = a \div b = \frac{a}{b} \) is called the ratio of \( a \) to \( b \).

Rational number
 A number that can be expressed as \( \frac{m}{n} \) using integers \( m \) and \( n \) (with \( n \neq 0 \)) is called a rational number.

Ray
 The set of all points on one side of a point \( O \) on a straight line. The point \( O \) is called the origin of this ray.

Real interval
 A method for representing a range of real numbers. Using an inequality involving a variable \( x \) and constants \( a \) and \( b \), it can be classified as follows.
\( [a,b] \) : \( a \leqq x \leqq b \)
\( (a,b] \) : \( a \lt x \leqq b \)
\( [a,b) \) : \( a \leqq x \lt b \)
\( (a,b) \) : \( a \lt x \lt b \)

Real matrix
 A matrix whose elements are all real numbers is called a real matrix. A real matrix whose elements are all positive real numbers is called a positive matrix, and a real matrix whose elements are all non-negative real numbers is called a non-negative matrix.

Real numbers
 Rational and irrational numbers are collectively called real numbers.

Reciprocal
 For a number \( x \), a number \( a \) that satisfies \( xa = 1 \) is called the reciprocal of \( x \), and it is represented as \( a = \frac{1}{x} \).

Region
 A subset \( D \) of the complex plane is called path-connected if any two points in \( D \) can be joined by a continuous curve lying entirely within \( D \). An open, path-connected set is called a region.

Right-angled triangle
 A triangle with one of its interior angles being a right angle is called a right-angled triangle. The side opposite the right angle is called the hypotenuse.

Rolle's theorem
 If a function \( f(x) \) is continuous on \( [a,b] \), differentiable on \( (a,b) \), and satisfies \( f(a) = f(b) \), then there exists some \( c \) with \( a \lt c \lt b \) such that \( f'(c) = 0 \). This is called Rolle’s theorem.

Rounding error
 The error that arises when digits of a number are dropped by rounding in the course of computing an approximation is called rounding error.

Runge-Kutta methods
 One of the numerical integration methods for ordinary differential equations. Let the unknown function be \( y = f(x) \), and its derivative \( y' = g(x, y) \) be known. In this case, with the step size \( h \) and initial values \( (x_0, y_0) \), the numerical solution \( (x_n, y_n) \) (where \( n \) is a natural number) is determined as follows. \[ \begin{align} x_n &= x_{n-1} + h \\\\ y_n &= y_{n-1} + \frac{1}{6} h \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right), \\\\ \end{align}\] where \[ \begin{align} k_1 &= g \left( x_{n-1} , y_{n-1} \right) \\\\ k_2 &= g \left( x_{n-1} + \frac{1}{2} h , y_{n-1} + \frac{1}{2} h k_1 \right) \\\\ k_3 &= g \left( x_{n-1} + \frac{1}{2} h , y_{n-1} + \frac{1}{2} h k_2 \right) \\\\ k_4 &= g \left( x_{n-1} + h , y_{n-1} + h k_3 \right). \end{align}\]
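 A minimal sketch of these formulas in plain Python (the test equation \( y' = y \), \( y(0) = 1 \), whose exact solution is \( y = e^x \), is our own illustrative choice):

```python
# Classical 4th order Runge-Kutta method for y' = g(x, y), as defined above.
def rk4(g, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        k1 = g(x, y)
        k2 = g(x + h / 2, y + h * k1 / 2)
        k3 = g(x + h / 2, y + h * k2 / 2)
        k4 = g(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return x, y

# Example: y' = y with y(0) = 1, so the exact value at x = 1 is e.
x, y = rk4(lambda x, y: y, x0=0.0, y0=1.0, h=0.1, steps=10)
print(x, y)  # 1.0  2.7182797... (exact: e = 2.7182818...)
```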
S
Separation of variables
 This refers to a technique for solving the ordinary differential equation \[ \frac{dy}{dx} = f(y) g(x)\] By dividing both sides by \( f(y) \), the equation is rewritten in a separable form. Then, both sides are integrated with respect to \( x \). The solution is determined by using the initial condition to find the integration constant.
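 For example (our own illustration), for \( \frac{dy}{dx} = xy \) with \( y \gt 0 \): dividing both sides by \( y \) and integrating with respect to \( x \) gives \[ \ln y = \frac{x^2}{2} + C_1 , \quad \text{that is,} \quad y = C e^{x^2 / 2} . \] The initial condition \( y(0) = 1 \) then gives \( C = 1 \).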

Sequence of numbers
 A sequence of numbers arranged according to a certain rule is called a sequence. Each number that constitutes a sequence is called a term. A sequence with a finite number of terms is called a finite sequence, while a sequence with an infinite number of terms is called an infinite sequence. A sequence is generally represented as: \[ a_1, \ a_2,\ a_3, \ldots ,\ a_n, \ldots \] In this case, \( a_1 \) is called the first term. In the case of a finite sequence, the last term of the sequence is called the last term. A sequence can also be expressed as \( \{ a_n \} \). An equation that expresses each term of a sequence using the term number \( n \) is called the general term.

Set
 A well-defined "collection of objects" is called a set. Each individual object that forms a set is called an element or member of the set. When \( x \) is an element of set \( X \), it is written as: \[ x \in X \quad \text{or} \quad X \ni x \] This can also be expressed as "\( x \) belongs to \( X \)", "\( x \) is contained in \( X \)", or "\( X \) contains \( x \)". On the other hand, when \( x \) is not an element of \( X \), it is written as: \[ x \notin X \quad \text{or} \quad X \not\ni x\]
 When all elements of a set can be listed as \( a, b, c, \ldots \), the set is represented by the notation \[ \{ a, b, c, \ldots \}.\] This is called the extensional notation of a set. On the other hand, when a certain condition \( C(x) \) is given, the set of all \( x \) that satisfy this condition is represented by the notation \[ \{ x \mid C(x) \}.\] This notation is called the intensional notation of a set.
 A set with a finite number of elements is called a finite set, while a set with infinitely many elements is called an infinite set.
 A set that contains no elements is called the empty set, represented by the symbol \[ \varnothing.\]
 Let \( X \) and \( Y \) be two sets. If every element of \( X \) is an element of \( Y \), and every element of \( Y \) is an element of \( X \), then \( X \) and \( Y \) are said to be equal, represented as \[ X = Y.\]  If every element of set \( X \) is also an element of set \( Y \), then \( X \) is called a subset of \( Y \), or \( X \) is said to be contained in \( Y \). This is represented as \[ X \subset Y \quad \text{or} \quad Y \supset X.\] This notation allows for the possibility that \( X = Y \). The empty set \( \varnothing \) is considered a subset of any set.
 If \( X \) is a subset of \( Y \) and \( X \neq Y \), then \( X \) is called a proper subset of \( Y \).
 The set of all elements that are in at least one of \( X \) or \( Y \) is called the union of \( X \) and \( Y \), denoted by \[ X \cup Y.\] On the other hand, the set of all elements that are in both \( X \) and \( Y \) is called the intersection of \( X \) and \( Y \), denoted by \[ X \cap Y.\] If \( X \cap Y = \varnothing \), then \( X \) and \( Y \) are said to be disjoint. If \( X \cap Y \neq \varnothing \), then \( X \) and \( Y \) are said to intersect.

 A family of sets refers to a collection of sets. Let \( \Omega \) be a family of sets. The set of all elements that belong to at least one set in \( \Omega \) is called the union of the family of sets \( \Omega \), and it is denoted by \[ \bigcup_{X \in \Omega} X.\] Similarly, the set of all elements that are common to all sets in \( \Omega \) is called the intersection of the family of sets \( \Omega \), and it is denoted by \[ \bigcap_{X \in \Omega} X.\]
 A family of sets \( \{ X_i \}_{i \in I} \) is a collection of sets where each element \( i \) in the index set \( I \) corresponds to one set \( X_i \). The union of this family of sets is denoted by \[ \bigcup_{i \in I} X_i,\] and the intersection of this family is denoted by \[ \bigcap_{i \in I} X_i.\] When \( I \) is a finite set with \( n \) elements, denoted as \( 1, 2, \dots, n \), the family of sets \( \{ X_i \}_{i \in I} \) can also be written as \[ \{ X_i \}_{i=1, 2, \dots, n},\] and the union is expressed as \[ \bigcup_{i=1}^n X_i \quad \text{or} \quad X_1 \cup X_2 \cup \cdots \cup X_n,\] and the intersection is written as \[ \bigcap_{i=1}^n X_i \quad \text{or} \quad X_1 \cap X_2 \cap \cdots \cap X_n.\] When \( I \) is the set of all positive integers, the union of the family of sets \( \{ X_i \}_{i \in I} \) is written as \[ \bigcup_{i=1}^{\infty} X_i,\] and the intersection is written as \[ \bigcap_{i=1}^{\infty} X_i.\]
 Let \( X \) and \( Y \) be two sets. The set of elements that are in \( Y \) but not in \( X \) is denoted by \( Y - X \). Specifically, when \( X \subset Y \), the set \( Y - X \) is called the complement of \( X \) with respect to \( Y \).
 When all the sets under consideration are subsets of a set \( U \), \( U \) is called the universal set. In this case, the set of elements that are not in a set \( X \) but are in the universal set \( U \) is simply called the complement of \( X \), and it is denoted by \[ U - X = X^c.\]
 The following De Morgan's Laws hold for complements: \[ \left( X \cup Y \right) ^c = X^c \cap Y^c \] \[ \left( X \cap Y \right) ^c = X^c \cup Y^c \]  The set of all subsets of a given set is called the power set. The power set of a set \( U \) is denoted by \[ \mathcal{P}(U).\]
 Let \( I = \{ 1, 2, \dots, n \} \) be a finite set, and let \( \{ X_i \}_{i=1, 2, \dots, n} \) be a family of sets indexed by \( I \). Then, by selecting one element \( x_i \) from each \( X_i \), we can form an ordered tuple \[ \left( x_1 , \ x_2 , \ \cdots , \ x_n \right). \] The set of all such ordered tuples is called the Cartesian product of the family \( \{ X_i \}_{i=1, 2, \dots, n} \) (or the Cartesian product of \( X_1, X_2, \dots, X_n \)). It is denoted by \[ X_1 \times X_2 \times \cdots \times X_n \quad \text{or} \quad \prod_{i=1}^n X_i.\]  For an element \( z = (x_1, x_2, \dots, x_n) \) of the Cartesian product \[ Z = X_1 \times X_2 \times \cdots \times X_n ,\] each \( x_i \) is called the component or the \( i \)th component of \( z \). Sometimes, the term coordinate is used instead of component.

 For a finite set \( X \), the number of elements in \( X \) is called the cardinality of \( X \), and it is denoted by \[ |X|.\] When \( X \) and \( Y \) are finite sets, the following holds regarding the cardinality: \[ |X \cup Y| = |X| + |Y| - |X \cap Y |\]
Sides
 An equation's equal sign \( (=) \) or an inequality's inequality signs \( (\lt, \gt, \leqq, \geqq) \) connect unknown variables, constants, and numbers. The unknown variables, constants, and numbers on the left side of the equal or inequality sign are called the left-hand side (LHS), while those on the right side are called the right-hand side (RHS).

Sinc function
 A function defined by the following equation: \[ {\rm {sinc}} \ x = \frac{\sin x}{x} \] For the sinc function, the following limit formula holds: \[ \lim _{x \to 0} {\rm {sinc}} \ x = \lim _{x \to 0} \frac{\sin x}{x} = 1 \]
Solution
 Focusing on one variable in an equation and transforming the equation into the form \[ \text{(focused variable)} = \text{(expression in the remaining variables and constants)} \] is called solving for that variable.
Example: In the equation \[x + 2y = 6,\] solving for \(y\) gives \[y = -\frac{1}{2}x + 3.\]
Square matrix
 An \( (n, n) \)-matrix is called an \( n \)th order square matrix or simply an \( n \)th order matrix. When \( A \) is an \( n \)th order matrix, the product \( AA \) is written as \( A^2 \) and is called the square of \( A \). Similarly, the product of \( k \) copies of \( A \) is written as \( A^k \) and is called the \( k \)th power of \( A \). For natural numbers \( k \) and \( l \), the following exponentiation rules hold: \[ \begin{align} & A^k A^l = A^{k+l} \\\\ & \left( A^k \right) ^l = A^{kl} \\\\ & AB = BA \ \ \text{implies} \ \ \left( AB \right) ^k = A^k B^k \end{align} \] Furthermore, if \( A \) is an invertible matrix, then defining \[ \begin{align} A^0 = E, \ \ A^{-k} = \left( A^{-1} \right) ^k \end{align}\] ensures that the exponentiation rules hold for any integer values of \( k \) and \( l \).

Square root
 The square root \( \sqrt{x} \) of a non-negative number \( x \) is the non-negative number that, when multiplied by itself, equals \( x \).
Example: The square root of \( 4 \) is \( \sqrt{4} = 2 \).

Straight line
[1] A function of the form \[ y = ax + b \quad (a, b \text{ are constants})\] which is also called a linear function. The coefficient \( a \) is called the slope or rate of change of the line, while \( b \) is called the \( y \)-intercept.

[2] A straight line that extends infinitely in both directions without thickness. A unique straight line is determined by specifying any two points it passes through.

Straight line segment
 The set of all points on the straight line between two points \( A \) and \( B \), including the points \( A \) and \( B \) themselves. The points \( A \) and \( B \) are called the endpoints of this line segment.

Substitution
 Replacing a certain letter or symbol with another letter, symbol, or number.
Example 1: Substituting \( x = 3 \) into \( y = 2x \) gives: \[ y = 2 \times 3 = 6 \] Example 2: Substituting \( a = 5c \) into \( b = 2a \) gives: \[ b = 2 \times 5c = 10c \]
Sum
 The result of addition.

Summation
 Let \( n \lt m \). The sum of the terms of the sequence \( \{ a_k \} \) from \( a_n \) to \( a_m \) is represented as \[ \sum _{k=n} ^m a_k\] That is, \[ \sum _{k=n} ^m a_k = a_n + a_{n+1} + \cdots + a_{m-1} + a_m .\]
Superposition principle
 For two solutions \( y_1 \) and \( y_2 \) of a linear homogeneous differential equation, if \( c_1 \) and \( c_2 \) are arbitrary constants, then \[ c_1 y_1 + c_2 y_2\] is also a solution of the same linear homogeneous differential equation. This property is called the superposition principle.

System of equations
 A set of multiple equations that hold true simultaneously.

System of linear equations
 A system of equations consisting only of linear equations is called a system of linear equations. In general, a system of \( n \) linear equations with \( n \) unknowns \( x_1, x_2, \ldots, x_n \) can be written as: \[ \begin{align} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\\\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\\\ \ \ \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \\\\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n &= b_n \end{align}\] Such a system is called an \( n \)-variable system of linear equations.
 Let the \( n \times n \) matrix \( A \), the column vector with \( n \) elements \( \boldsymbol{x} \), and the column vector with \( n \) elements \( \boldsymbol{b} \) be defined as: \[ \begin{align} A = \left( \begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array} \right) , \ \boldsymbol x = \left( \begin{array}{cccc} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array} \right) , \ \boldsymbol b = \left( \begin{array}{cccc} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{array} \right) \end{align}\] Then, the above \( n \)-variable system of linear equations can be written in matrix form as: \[ \begin{align} A \boldsymbol x = \boldsymbol b \end{align}\] In this case, \( A \) is called the coefficient matrix of the system of equations.
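 A minimal numerical sketch (using NumPy, our own choice of tool) for solving \( A \boldsymbol x = \boldsymbol b \) in a small illustrative case:

```python
import numpy as np

# The 2-variable system  2 x1 + x2 = 5,  x1 + 3 x2 = 10,
# written in the matrix form A x = b defined above.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)  # requires the coefficient matrix A to be regular
print(x)                   # [1. 3.]
```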

T
Tangent
 A straight line that shares a point with a curve and has the same slope as the curve at that point.

Taylor's theorem
 Let \( f(x) \) be a \( C^{n-1} \)-function on \( [a,b] \) (or \( [b,a] \)) and assume that \( f \) is \( n \)-times differentiable on \( (a,b) \) (or \( (b,a) \)). Then there exists a point \( c \) with \( a \lt c \lt b \) (or \( b \lt c \lt a \)) such that \[ \begin{align} f(b) = \sum _{k=0} ^{n-1} \frac{f^{\left( k \right)} (a)}{k!} \left( b-a \right)^{k} + \frac{f^{\left( n \right)} (c)}{n!} \left( b-a \right)^n \end{align}\]
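 For example, applying the theorem to \( f(x) = e^x \) with \( a = 0 \) (every derivative of \( e^x \) is \( e^x \)) gives, for some \( c \) between \( 0 \) and \( b \), \[ e^b = \sum _{k=0} ^{n-1} \frac{b^k}{k!} + \frac{e^c}{n!} b^n \]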
Term
[1] Each variable, constant, or number in an equation or inequality that is separated by a \( + \) or \( - \) sign. However, parts enclosed in parentheses are treated as a single term.
Example: In the equation \( 2x + (3a - 2) - 1 = 0 \), the terms on the left side are \( 2x \), \( (3a - 2) \), and \( 1 \), while the term on the right side is \( 0 \).

[2] An individual number that makes up a sequence.

Theorem of Pythagoras
[1] Pythagorean Theorem (Right Triangle)
 In a right triangle, let \( c \) be the length of the hypotenuse and \( a \), \( b \) be the lengths of the other two sides. Then, the following equation holds: \[ c^2 = a^2 + b^2 \] This is called the Pythagorean Theorem, also known as the Theorem of Three Squares in Japanese.

[2] Pythagorean Theorem (Vector Form)
 Let \( \boldsymbol{x} \) and \( \boldsymbol{y} \) be vectors with \( n \) elements that are orthogonal to each other. Then, the following equation holds: \[ \| \boldsymbol{x} + \boldsymbol{y} \| ^2 = \| \boldsymbol{x} \| ^2 + \| \boldsymbol{y} \| ^2 \] This is also referred to as the Pythagorean Theorem, applied in the context of vector spaces.
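Example: For [1], a right triangle whose legs have lengths \( a = 3 \) and \( b = 4 \) has hypotenuse \( c = 5 \), since \( 3^2 + 4^2 = 25 = 5^2 \). For [2], the orthogonal vectors \( \boldsymbol{x} = (1, 0) \) and \( \boldsymbol{y} = (0, 2) \) satisfy \( \| \boldsymbol{x} + \boldsymbol{y} \| ^2 = 5 = \| \boldsymbol{x} \| ^2 + \| \boldsymbol{y} \| ^2 \).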

Total derivative
 A function of two variables \( f \left( x,y \right) \) is said to be totally differentiable at the point \( \alpha \left( a,b \right) \) if there exists some \( \delta \gt 0 \) such that, on the interior of the circle centered at \( \alpha \) with radius \( \delta \), \[ \begin{align} U \left( \alpha , \delta \right) = \left\{ \left( x,y \right) \ | \ \sqrt{ \left( x - a \right)^2 + \left( y - b \right)^2 } \lt \delta \right\}, \end{align}\] the function \( f \left( x,y \right) \) can be written as \[ \begin{align} f \left( x,y \right) = f \left( a,b \right) + (x-a) \frac{\partial f}{\partial x} \left( a,b \right) + (y-b) \frac{\partial f}{\partial y} \left( a,b \right) + \rho \left( x,y \right) C \left( x,y \right). \end{align}\] Here, \( C \left( x,y \right) \) is a two-variable function defined on \( U \left( \alpha , \delta \right) \) that is continuous at \( \left( a,b \right) \) and satisfies \( C \left( a,b \right) = 0 \), and \[ \begin{align} \rho \left( x,y \right) = \sqrt{ \left( x - a \right)^2 + \left( y - b \right)^2 } \ . \end{align}\]
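Example: The function \( f(x,y) = x^2 + y^2 \) is totally differentiable at every point \( (a,b) \), since expanding gives \[ f(x,y) = f(a,b) + 2a(x-a) + 2b(y-b) + \rho \left( x,y \right) ^2 , \] so one may take \( C(x,y) = \rho (x,y) \), which is continuous and satisfies \( C(a,b) = 0 \).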
Translation
 The function \( y = f(x - a) \) is a translation of the function \( y = f(x) \) by \( a \) units in the \( x \) -axis direction. Similarly, the function \( y - a = f(x) \) is a translation of the function \( y = f(x) \) by \( a \) units in the \( y \) -axis direction.
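Example: The graph of \( y = (x-2)^2 \) is the graph of \( y = x^2 \) translated \( 2 \) units in the \( x \)-axis direction, and \( y - 3 = x^2 \) (that is, \( y = x^2 + 3 \)) is \( y = x^2 \) translated \( 3 \) units in the \( y \)-axis direction.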

Transposed matrix
 The \( (n, m) \)-matrix obtained from an \( (m, n) \)-matrix \( A \) by swapping its rows and columns is called the transpose matrix of \( A \) and is denoted as \( ^t A \). That is, if \[ \begin{align} A = \left( \begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{array} \right) \end{align}\] then its transpose is given by \[ \begin{align} ^t A = \left( \begin{array}{cccc} a_{11} & a_{21} & \ldots & a_{m1} \\ a_{12} & a_{22} & \ldots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \ldots & a_{mn} \end{array} \right) \end{align}\]
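Example: \[ \begin{align} A = \left( \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array} \right) , \ \ \ ^t A = \left( \begin{array}{cc} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{array} \right) \end{align}\]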
 Regarding the transpose matrix, the following properties hold: \[ \begin{align} & ^t \left( ^t A \right) = A \\\\ & ^t \left( A ^ * \right) = \left( ^t A \right) ^* \\\\ & ^t \left( A + B \right) = {}^tA + {}^tB \\\\ & ^t \left( cA \right) = c \ {}^t A \\\\ & ^t \left( A B \right) = {}^t B \ {}^t A \end{align}\]
Transposition
 Moving a term from the left side of an equation to the right side, or from the right side to the left side, while changing its sign.
Example: Moving the \( 2 \) from the left side of the equation \( 1 + 2 = 3 \) gives \( 1 = 3 - 2 \).

Triangle
 Three points not on the same straight line, together with the three line segments connecting each pair of them, form a triangle. The given three points are called the vertices of the triangle, and the three line segments are called its sides. The interior of the angle formed by two adjacent sides is called an interior angle of the triangle.

Trigonometric function
 Take the point \( P (x,y) \) on the unit circle corresponding to the general angle \( \theta \). Then the following three functions of \( \theta \) are called trigonometric functions. \[ \sin \theta = y\] \[ \cos \theta = x\] \[ \tan \theta = \frac{\sin \theta}{\cos \theta} = \frac{y}{x} \ (x \neq 0) \]
 The following formulas hold for trigonometric functions. Here, all variables of the trigonometric functions represent general angles. \[ \sin ^2 \theta + \cos ^2 \theta = 1 \] \[ \tan ^2 \theta + 1 = \frac{1}{\cos ^2 \theta} \] \[ \sin (- \theta ) = - \sin \theta \] \[ \cos (- \theta ) = \cos \theta \] \[ \tan (- \theta ) = - \tan \theta \] \[ \sin \theta = \cos \left( \theta - \frac{\pi}{2} \right) \] \[ \cos \theta = \sin \left( \theta + \frac{\pi}{2} \right) \] \[ - \sin \theta = \cos \left( \theta + \frac{\pi}{2} \right) \] \[ - \cos \theta = \sin \left( \theta - \frac{\pi}{2} \right) \]
\[ \text{Sum and difference formulas} \] \[ \sin ( \alpha + \beta ) = \sin \alpha \cos \beta + \cos \alpha \sin \beta \] \[ \sin ( \alpha - \beta ) = \sin \alpha \cos \beta - \cos \alpha \sin \beta \] \[ \cos ( \alpha + \beta ) = \cos \alpha \cos \beta - \sin \alpha \sin \beta \] \[ \cos ( \alpha - \beta ) = \cos \alpha \cos \beta + \sin \alpha \sin \beta \] \[ \tan ( \alpha + \beta ) = \frac{\tan \alpha + \tan \beta}{1 - \tan \alpha \tan \beta} \] \[ \tan ( \alpha - \beta ) = \frac{\tan \alpha - \tan \beta}{1 + \tan \alpha \tan \beta} \]
\[ \text{Composition of trigonometric functions} \] \[ a \sin \theta + b \cos \theta = \sqrt{a^2 + b^2} \sin \left( \theta + \alpha \right), \] where \[ \begin{align} \cos \alpha &= \frac{a}{\sqrt{a^2 + b^2}} \\\\ \sin \alpha &= \frac{b}{\sqrt{a^2 + b^2}}. \end{align}\]
\[ \text{Product to sum formula} \] \[ \begin{align} \sin \alpha \cos \beta &= \frac{1}{2} \left\{ \sin (\alpha + \beta) + \sin (\alpha - \beta) \right\} \\\\ \cos \alpha \sin \beta &= \frac{1}{2} \left\{ \sin (\alpha + \beta) - \sin (\alpha - \beta) \right\} \\\\ \cos \alpha \cos \beta &= \frac{1}{2} \left\{ \cos (\alpha + \beta) + \cos (\alpha - \beta) \right\} \\\\ \sin \alpha \sin \beta &= - \frac{1}{2} \left\{ \cos (\alpha + \beta) - \cos (\alpha - \beta) \right\} \\\\ \end{align}\]
\[ \text{Sum to product formula} \] \[ \begin{align} \sin A + \sin B &= 2 \sin \frac{A+B}{2} \cos \frac{A-B}{2} \\\\ \sin A - \sin B &= 2 \cos \frac{A+B}{2} \sin \frac{A-B}{2} \\\\ \cos A + \cos B &= 2 \cos \frac{A+B}{2} \cos \frac{A-B}{2} \\\\ \cos A - \cos B &= - 2 \sin \frac{A+B}{2} \sin \frac{A-B}{2} \end{align}\]
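Example: In the composition formula with \( a = b = 1 \), we have \( \cos \alpha = \sin \alpha = \frac{1}{\sqrt{2}} \), so \( \alpha = \frac{\pi}{4} \) and \[ \sin \theta + \cos \theta = \sqrt{2} \sin \left( \theta + \frac{\pi}{4} \right) . \]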
U
Unitary matrix
 A square matrix \( A \) is called a unitary matrix if it satisfies \( A A^{\dagger} = E \). In particular, a unitary matrix that is a real matrix is called an orthogonal matrix.
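Example: The complex matrix \[ \begin{align} A = \frac{1}{\sqrt{2}} \left( \begin{array}{cc} 1 & i \\ i & 1 \end{array} \right) \end{align}\] satisfies \( A A^{\dagger} = E \) and is therefore unitary. A real example is the rotation matrix \( \left( \begin{array}{cc} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right) \), which is an orthogonal matrix.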

V
Value
 A number assigned to a letter or symbol.
Example: The equation \( x = 3 \) means that the value of \( x \) is \( 3 \).

Vandermonde polynomial
 The polynomial \( \Delta \left( x_1 , x_2 , \ldots , x_n \right) \) in \( x_1 , x_2 , \ldots , x_n \) represented by the following equation is called the Vandermonde polynomial of an ordered set of \( n \) variables \( x_1 , x_2 , \ldots , x_n \). \[ \begin{align} \Delta \left( x_1 , x_2 , \ldots , x_n \right) &= \prod _{i \lt j} \left( x_j - x_i \right) \\\\ &= \prod _{j=2} ^n \left\{ \prod _{i=1} ^{j-1} \left( x_j - x_i \right) \right\} \\\\ &= \left( x_n - x_{n-1} \right) \left( x_n - x_{n-2} \right) \cdots \left( x_n - x_{2} \right) \left( x_n - x_{1} \right) \\\\ &\quad \times \left( x_{n-1} - x_{n-2} \right) \cdots \left( x_{n-1} - x_{2} \right) \left( x_{n-1} - x_{1} \right) \\\\ &\quad \times \cdots \\\\ &\quad \times \left( x_{3} - x_{2} \right) \left( x_{3} - x_{1} \right) \\\\ &\quad \times \left( x_{2} - x_{1} \right) \end{align}\]
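Example: For \( n = 3 \), \[ \Delta \left( x_1 , x_2 , x_3 \right) = \left( x_2 - x_1 \right) \left( x_3 - x_1 \right) \left( x_3 - x_2 \right) . \]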
Variables
 When a letter or symbol is treated as something that can change in value, it is called a variable.

Variance
 When the expectation of a random variable \( X \) is denoted as \( E(X) \), the variance \( V(X) \) of \( X \) is defined by the following equation: \[ V(X) = E \left( \left( X - E(X) \right) ^2 \right) \] The variance \( V(X) \) can also be calculated using the formula: \[ V(X) = E(X^2) - \left\{ E(X) \right\}^2 \]
Translation and scaling of variance
 For the variance of a random variable \( X \), if \( a \) is a constant, the following hold: \[ V(X+a) = V(X) \] \[ V(aX) = a^2 V(X) \]
Variance in the product of probability spaces
 Let \( X \) be a random variable on the probability space \( \left( \Omega _1 , \mathcal{P} (\Omega _1) , P _1 \right) \) and \( Y \) be a random variable on the probability space \( \left( \Omega _2 , \mathcal{P} (\Omega _2) , P _2 \right) \). In the product of these two probability spaces, the variance of \( X+Y \) satisfies the following equation: \[ V(X+Y) = V(X) + V(Y) \]
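Example: For a fair six-sided die, \( E(X) = \frac{7}{2} \) and \( E(X^2) = \frac{91}{6} \), so \[ V(X) = \frac{91}{6} - \left( \frac{7}{2} \right) ^2 = \frac{35}{12} . \]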
Von Neumann stability analysis
 When a partial differential equation in which the variable \( t \) represents time is solved numerically with a finite difference scheme, the scheme is called unstable if rounding errors grow without bound as time advances.
 Von Neumann stability analysis is a technique that uses the Fourier series of rounding errors to determine the conditions under which a finite difference method becomes unstable when numerically solving a linear partial differential equation with constant coefficients.
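 As a standard illustrative example, consider the one-dimensional heat equation \( \frac{\partial u}{\partial t} = \kappa \frac{\partial ^2 u}{\partial x^2} \) (with \( \kappa \) a positive constant) discretized by the explicit scheme \[ u_j^{n+1} = u_j^n + r \left( u_{j+1}^n - 2 u_j^n + u_{j-1}^n \right) , \ \ r = \frac{\kappa \Delta t}{\left( \Delta x \right)^2} . \] Substituting a single Fourier error mode \( \epsilon _j ^n = \xi ^n e^{ikj \Delta x} \) into the scheme gives the amplification factor \[ \xi = 1 - 4 r \sin ^2 \frac{k \Delta x}{2} , \] and requiring \( | \xi | \leq 1 \) for every wavenumber \( k \) yields the stability condition \( r \leq \frac{1}{2} \).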

W
Weak law of large numbers
 Let \( X \) be a random variable on the probability space \( \left( \Omega , \mathcal{P} (\Omega), P \right) \). Consider \( n \) random variables \( X_1, X_2, \dots, X_n \) on the same probability space that follow the same probability distribution as \( X \). Define the random variable: \[ Y_n = \frac{X_1}{n} + \frac{X_2}{n} + \cdots + \frac{X_n}{n}\] For any \( \epsilon > 0 \), the following holds: \[ \lim _{n \to \infty} P \left( \left| Y_n - E(X) \right| \geq \epsilon \right) = 0 \] This result is known as the Weak Law of Large Numbers.
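Example: Let \( X_k = 1 \) if the \( k \)-th toss of a fair coin is heads and \( X_k = 0 \) otherwise, so that \( E(X) = \frac{1}{2} \). Then \( Y_n \) is the fraction of heads in \( n \) tosses, and the weak law states that the probability of this fraction deviating from \( \frac{1}{2} \) by at least \( \epsilon \) tends to \( 0 \) as \( n \to \infty \).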

X
Y
Z
Zero matrix
 An \( (m, n) \)-matrix in which all elements are zero is called the \( (m, n) \)-zero matrix and is denoted as \( O_{m, n} \). It may also be written simply as \( O \).
 For any \( (m, n) \)-matrix \( A \), the following properties hold: \[ \begin{align} & A + O = A \\\\ & A - A = O \\\\ & 0A = O \\\\ & AO = O \\\\ & OA = O \end{align}\]


