MTH603 Mid Term Past and Current Solved Paper Discussion
-
Below are all the finite difference methods EXCEPT _________.
Jacobi's method
Newton's backward difference method
Stirling's formula
Forward difference method

The correct answer is:
Jacobi's method
Explanation:
-
Jacobi’s Method: This is an iterative method for solving a system of linear equations, not a finite difference method.
-
Newton’s Backward Difference Method: This is a finite difference method used for interpolation and numerical differentiation.
-
Stirling Formula: This is used for interpolation in finite difference methods and approximates the values of the function using finite differences.
-
Forward Difference Method: This is used in finite difference methods to approximate derivatives and solve differential equations.
Thus, Jacobi’s method is not a finite difference method, so it is the correct choice for the given question.
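Since the distinction here is between an iterative linear-system solver and finite difference formulas, a minimal Python sketch of Jacobi's method may help (the system, starting guess, and iteration count below are assumed purely for illustration):

```python
# A minimal sketch of Jacobi's method for solving A x = b: it iterates
# on a linear system, rather than approximating derivatives the way
# finite difference formulas do. Example system assumed for illustration.

def jacobi(A, b, x0, iterations=50):
    """Jacobi iteration: every component of the new iterate is computed
    from the PREVIOUS iterate only (no in-place updates)."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x_new = []
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new
    return x

# Strictly diagonally dominant system: 4x + y = 9, x + 3y = 7
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 7.0]
sol = jacobi(A, b, [0.0, 0.0])  # converges to x = 20/11, y = 19/11
```

Diagonal dominance guarantees the iteration converges here, which connects this question to the Gauss-Seidel question further down the thread.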
-
-
Two matrices with the same characteristic polynomial need not be similar.
TRUE
FALSE

The correct answer is:
TRUE
Explanation:
-
Similar Matrices: Two matrices (A) and (B) are similar if there exists an invertible matrix (P) such that (B = P^{-1}AP). Similar matrices share the same eigenvalues and their Jordan forms are the same, which means they have the same characteristic polynomial.
-
Characteristic Polynomial: Matrices that have the same characteristic polynomial are guaranteed to have the same eigenvalues, but this alone does not guarantee similarity. Similar matrices must also have the same Jordan canonical form or must be related by a similarity transformation.
Therefore, having the same characteristic polynomial is a necessary but not sufficient condition for similarity.
So, the correct statement is:
TRUE
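A concrete counterexample makes this clear, and can be checked numerically (matrices chosen for illustration): the 2x2 nilpotent matrix and the zero matrix share the characteristic polynomial lambda^2, yet they cannot be similar, because P^{-1} 0 P = 0 for every invertible P, so the only matrix similar to the zero matrix is the zero matrix itself.

```python
# Two matrices with the same characteristic polynomial that are NOT similar.
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # nilpotent, non-zero
B = np.zeros((2, 2))         # zero matrix

# np.poly on a square matrix returns the characteristic-polynomial
# coefficients; both give [1, 0, 0], i.e. lambda^2.
pA = np.poly(A)
pB = np.poly(B)
same_char_poly = np.allclose(pA, pB)

# A is non-zero, so it cannot equal P^{-1} B P = 0 for any invertible P.
not_similar = not np.allclose(A, B)
```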
-
-
The determinant of a diagonal matrix is the product of the diagonal elements.
True
False

The correct answer is:
True
Explanation:
- For a diagonal matrix, all off-diagonal elements are zero. The determinant of a diagonal matrix is calculated as the product of its diagonal elements.
For example, if (D) is a diagonal matrix with diagonal elements (d_1, d_2, \ldots, d_n), then:
[ \text{det}(D) = d_1 \cdot d_2 \cdot \ldots \cdot d_n ]
So, the determinant of a diagonal matrix is indeed the product of its diagonal elements.
Therefore, the statement is:
True
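A quick numerical check (diagonal entries assumed for illustration):

```python
# For a diagonal matrix, det(D) equals the product of the diagonal entries.
import numpy as np

d = [2.0, 3.0, 5.0]
D = np.diag(d)                # 3x3 diagonal matrix
det_D = np.linalg.det(D)      # determinant computed via LU factorization
product = np.prod(d)          # 2 * 3 * 5 = 30
```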
-
The Gauss-Seidel method is applicable to strictly diagonally dominant or symmetric positive definite matrices A.
True
False

The correct answer is:
True
Explanation:
-
Gauss-Seidel Method: This iterative method for solving linear systems converges under certain conditions.
-
Strictly Diagonally Dominant Matrices: If a matrix is strictly diagonally dominant (i.e., for each row of the matrix, the magnitude of the diagonal element is greater than the sum of the magnitudes of the other elements in that row), the Gauss-Seidel method will converge.
-
Symmetric Positive Definite Matrices: For symmetric positive definite matrices, the Gauss-Seidel method is guaranteed to converge.
Therefore, the Gauss-Seidel method is indeed applicable and convergent for strictly diagonally dominant or symmetric positive definite matrices.
So, the correct statement is:
True
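A minimal Gauss-Seidel sketch on a strictly diagonally dominant system (system and sweep count assumed for illustration). Unlike Jacobi, each updated component is used immediately within the same sweep, which typically speeds up convergence:

```python
# Gauss-Seidel iteration for A x = b: new values are used as soon as
# they are computed, instead of waiting for the next full sweep.

def gauss_seidel(A, b, x0, sweeps=25):
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]  # in-place: freshest values used
    return x

# Strictly diagonally dominant: |10| > |1| and |8| > |2| row-wise,
# so convergence is guaranteed. Exact solution: x = 1, y = 2.
A = [[10.0, 1.0], [2.0, 8.0]]
b = [12.0, 18.0]
sol = gauss_seidel(A, b, [0.0, 0.0])
```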
-
-
The determinant of a _______ matrix is the product of the diagonal elements.
- Upper triangular
- Lower triangular
- Scalar
- Diagonal
-
The determinant of a matrix is the product of the diagonal elements if the matrix is:
Diagonal, Upper Triangular, or Lower Triangular
Explanation:
-
Diagonal Matrix: All off-diagonal elements are zero. The determinant is the product of the diagonal elements.
-
Upper Triangular Matrix: All elements below the main diagonal are zero. The determinant is the product of the diagonal elements.
-
Lower Triangular Matrix: All elements above the main diagonal are zero. The determinant is the product of the diagonal elements.
-
Scalar Matrix: A diagonal matrix where all diagonal elements are the same scalar value. The determinant is still the product of the diagonal elements.
Therefore, the property holds for all four options: Diagonal, Upper Triangular, Lower Triangular, and Scalar (a scalar matrix is a special case of a diagonal matrix).
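The triangular case can be checked the same way as the diagonal one (entries assumed for illustration): cofactor expansion down the first column leaves only the diagonal terms, so the determinant is again the product of the diagonal entries.

```python
# For an upper-triangular matrix, det(U) is the product of the diagonal.
import numpy as np

U = np.array([[2.0, 7.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 5.0]])       # upper triangular
det_U = np.linalg.det(U)
diag_product = np.prod(np.diag(U))    # 2 * 3 * 5 = 30
```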
-
For difference methods, we require a set of values.
True
False

The correct answer is:
True
Explanation:
-
Difference Methods: These methods, such as finite difference methods, require a set of discrete values to approximate derivatives or solve differential equations.
-
For example, in numerical differentiation, you need discrete data points (or values) to estimate the derivatives using forward, backward, or central differences. Similarly, in finite difference methods for solving partial differential equations, a grid of discrete values is used to approximate solutions.
Thus, the statement that difference methods require a set of values is indeed correct.
So, the correct answer is:
True
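As a small illustration (sample points and function assumed), a forward difference table is built purely from a set of tabulated values, which is exactly why the discrete values are required:

```python
# Building a forward difference table from discrete samples of f(x) = x^2.

xs = [0, 1, 2, 3]            # equally spaced nodes
ys = [x ** 2 for x in xs]    # tabulated values: 0, 1, 4, 9

# Each row holds successive forward differences: delta y_i = y_{i+1} - y_i
table = [ys]
while len(table[-1]) > 1:
    prev = table[-1]
    table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

# For a quadratic sampled at unit spacing, the second differences are
# constant (here 2) and the third differences vanish.
```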
-
-
If x is an eigenvalue corresponding to eigenvector V of a matrix A, and a is any constant, then x - a is an eigenvalue corresponding to eigenvector V of the matrix A - aI.
True
False

The correct answer is:
True
Explanation:
-
If ( x ) is an eigenvalue of a matrix ( A ) corresponding to an eigenvector ( V ), this means:
[ A V = x V ]
If ( a ) is any constant, then ( x - a ) will be an eigenvalue of the matrix ( A - aI ), where ( I ) is the identity matrix.
-
To see why, consider:
[ (A - aI) V = A V - a I V ]
[ = x V - a V ]
[ = (x - a) V ]
This shows that ( V ) is still an eigenvector of ( A - aI ), but now with the eigenvalue ( x - a ).
So, the statement is:
True
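The shift property is easy to verify numerically (matrix and shift assumed for illustration): subtracting ( aI ) shifts every eigenvalue down by ( a ) while leaving the eigenvectors unchanged.

```python
# Check: eigenvalues of (A - aI) equal eigenvalues of A, each minus a.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 5.0]])   # eigenvalues 2 and 5
a = 1.5

eigvals_A = np.linalg.eigvals(A)
eigvals_shifted = np.linalg.eigvals(A - a * np.eye(2))  # should be 0.5, 3.5
```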
-
-
The central difference method seems to give a better approximation; however, it requires more computations.
True
False

The correct answer is:
True
Explanation:
-
Central Difference Method: This method approximates derivatives by averaging the forward and backward differences:
[ f'(x) \approx \frac{f(x + h) - f(x - h)}{2h} ]
Accuracy: The central difference method is often more accurate than the forward or backward difference methods because it uses information from both sides of the point where the derivative is being approximated. It has a smaller truncation error and provides a better approximation to the derivative.
-
Computations: While it is more accurate, the central difference method needs function values on both sides of the point (at ( x + h ) and ( x - h )), whereas forward and backward differences work with ( f(x) ), which is usually already tabulated, plus a single neighbouring value. In practice this means extra function evaluations and special handling at the ends of a table or grid, so more computational effort overall.
So, the statement that the central difference method gives a better approximation but requires more computations is:
True
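The accuracy gap is easy to see numerically (test function and step size assumed for illustration): the forward difference has O(h) truncation error while the central difference has O(h^2), so for the same ( h ) the central estimate is much closer.

```python
# Forward vs central difference approximations of f'(x) for f = sin.
import math

f = math.sin
x, h = 1.0, 1e-3
exact = math.cos(x)  # true derivative of sin at x

forward = (f(x + h) - f(x)) / h            # one-sided, O(h) error
central = (f(x + h) - f(x - h)) / (2 * h)  # two-sided, O(h^2) error

err_forward = abs(forward - exact)
err_central = abs(central - exact)
```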
-
-
Iterative algorithms can be more rapid than direct methods.
True
False