Econometrics

Econometric Analysis

Marcelo Fernandes
Queen Mary, University of London

Topic 1: Matrix Algebra

reference: Greene (1993), Econometric Analysis, Chapter 2

Road map

1. terminology
2. algebraic manipulation
3. geometry of matrices
4. matrix partitioning
5. matrix calculus

Terminology

matrix: a rectangular array of numbers,

$$A = [a_{nk}] = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1K} \\ a_{21} & a_{22} & \cdots & a_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NK} \end{pmatrix}$$

dimension: the number of rows and columns in a matrix, e.g., A is N × K

vector: an ordered set of numbers arranged either in a row or in a column, e.g., the n-th row of A is the row vector $a_{n\cdot} = (a_{n1}, a_{n2}, \ldots, a_{nK})$

Types of matrices

square: A has the same number of rows and columns (N = K)

symmetric: $a_{nk} = a_{kn}$ for all n and k ($A = A'$)

diagonal: a square matrix whose only nonzero elements appear on the main diagonal, moving from upper left to lower right

identity: a diagonal matrix with ones on the diagonal, $I_N$

triangular: a square matrix with only zeros either above or below the main diagonal (lower or upper triangular, respectively)
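These types are easy to see in code; below is a minimal numpy sketch, where the 3×3 matrix A is an invented example rather than one from the slides:

```python
import numpy as np

# An illustrative symmetric 3x3 matrix (hypothetical values).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])

print(A.shape[0] == A.shape[1])   # square: N = K -> True
print(np.allclose(A, A.T))        # symmetric: A = A' -> True

D = np.diag([4.0, 3.0, 5.0])      # diagonal matrix
I = np.eye(3)                     # identity matrix I_N
L = np.tril(A)                    # lower-triangular part of A
```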

Algebraic manipulation I

equality: $A = B$ iff $a_{nk} = b_{nk}$ for all n and k

transposition: $B = A'$ iff $b_{nk} = a_{kn}$ for all n and k → transpose of a transpose: $(A')' = A$ → symmetric matrices: $A = A'$ → transposing turns column vectors into row vectors and vice versa

operations: addition and multiplication (built up from inner products) → conformability + linear combinations → associative, commutative, and distributive laws → transposition: $(A + B)' = A' + B'$ and $(AB)' = B'A'$
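The two transposition rules are straightforward to check numerically; here is a small sketch with randomly drawn matrices (the dimensions are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))   # conformable with A for addition
C = rng.standard_normal((4, 2))   # conformable with A for multiplication

# (A + B)' = A' + B'
print(np.allclose((A + B).T, A.T + B.T))   # True
# (A C)' = C' A' -- note the reversed order
print(np.allclose((A @ C).T, C.T @ A.T))   # True
```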

Algebraic manipulation II

sum of values: $\sum_{n=1}^{N} x_n = \iota' x$ (example: sample average)

sum of squares: $\sum_{n=1}^{N} x_n^2 = x' x$ (example: sample variance)

cross product: $\sum_{n=1}^{N} x_n y_n = x' y$ (example: sample covariance)

projection matrices: $P_\iota = \frac{1}{N} \iota \iota'$ so that $P_\iota x = \bar{x}\,\iota$, and $M_\iota = I_N - P_\iota$ so that $M_\iota x = x - \bar{x}\,\iota$
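A small numpy sketch of these identities, using an invented data vector x (iota denotes the N-vector of ones, as above):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
x = rng.standard_normal(N)
iota = np.ones(N)                  # N-vector of ones

print(np.isclose(iota @ x, x.sum()))       # iota'x = sum of values
print(np.isclose(x @ x, (x ** 2).sum()))   # x'x = sum of squares

P = np.outer(iota, iota) / N       # P_iota = (1/N) iota iota'
M = np.eye(N) - P                  # M_iota = I_N - P_iota

print(np.allclose(P @ x, x.mean() * iota))   # P_iota x = xbar * iota
print(np.allclose(M @ x, x - x.mean()))      # M_iota x = deviations from mean
print(np.allclose(M @ M, M))                 # projections are idempotent
```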

Geometry of matrices I

vector space: any set of vectors that is closed under addition and scalar multiplication → spanned through linear combinations of basis vectors

linear independence: the rows of A are linearly independent iff $\alpha' A = 0$ only for $\alpha = 0$

matrix rank: the dimension of the vector space spanned by the columns/rows of a matrix → the number of linearly independent column/row vectors in the matrix

nice result: $\operatorname{rank}(A) = \operatorname{rank}(A'A) = \operatorname{rank}(AA')$ for any A
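The rank result can be illustrated with numpy's matrix_rank on a matrix built to have one linearly dependent column (an invented example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
# Append a fourth column that is a linear combination of the first two,
# so the rank stays at 3 despite the extra column.
A = np.column_stack([A, A[:, 0] + A[:, 1]])

r = np.linalg.matrix_rank
print(r(A), r(A.T @ A), r(A @ A.T))   # 3 3 3
```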

Geometry of matrices II

determinant: the volume of the parallelotope formed by the matrix columns → $\det(A)$ is nonzero iff A has full rank → $\det(A) = 0$ iff the columns are linearly dependent → for a diagonal matrix, $\det(D) = \prod_{n=1}^{N} d_n$

(Euclidean) norm: the length of a vector, $\|e\| = \sqrt{e'e}$

application: the least squares problem minimizes the norm of $e = y - Xb$:

$$\hat{b} = \operatorname*{argmin}_b \|y - Xb\| = \operatorname*{argmin}_b\, (y - Xb)'(y - Xb) \;\Rightarrow\; X'(y - X\hat{b}) = 0$$

solution: orthogonal projection → $\hat{b}$ such that $e \perp X$
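A minimal sketch of the least squares solution via the normal equations $X'Xb = X'y$, on simulated data (the design and coefficient values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
X = np.column_stack([np.ones(N), rng.standard_normal((N, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_normal(N)

# Normal equations: X'X b = X'y
b_hat = np.linalg.solve(X.T @ X, X.T @ y)

e = y - X @ b_hat
print(np.allclose(X.T @ e, 0))   # residuals are orthogonal to the columns of X
```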

Matrix partitioning
partition: grouping some of the elements in submatrices

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad AB = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{pmatrix}$$

$$\det(A) = \det(A_{11}) \cdot \det(A_{22} - A_{21} A_{11}^{-1} A_{12})$$

block diagonal: special case with $A_{11}$ and $A_{22}$ square,

$$A = \begin{pmatrix} A_{11} & 0 \\ 0 & A_{22} \end{pmatrix}, \qquad A'A = \begin{pmatrix} A_{11}' A_{11} & 0 \\ 0 & A_{22}' A_{22} \end{pmatrix}, \qquad A^{-1} = \begin{pmatrix} A_{11}^{-1} & 0 \\ 0 & A_{22}^{-1} \end{pmatrix}$$

$$\det(A) = \det(A_{11}) \cdot \det(A_{22})$$
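Both determinant identities can be verified numerically; here is a sketch with a randomly drawn 5×5 matrix partitioned into 3×3 and 2×2 diagonal blocks (an invented example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A11, A12 = A[:3, :3], A[:3, 3:]
A21, A22 = A[3:, :3], A[3:, 3:]

# Partitioned determinant: det(A) = det(A11) det(A22 - A21 A11^{-1} A12)
lhs = np.linalg.det(A)
rhs = np.linalg.det(A11) * np.linalg.det(A22 - A21 @ np.linalg.inv(A11) @ A12)
print(np.isclose(lhs, rhs))        # True

# Block-diagonal special case: det(B) = det(A11) det(A22)
B = np.block([[A11, np.zeros((3, 2))],
              [np.zeros((2, 3)), A22]])
print(np.isclose(np.linalg.det(B),
                 np.linalg.det(A11) * np.linalg.det(A22)))   # True
```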

Matrix calculus

consider a scalar-valued function f(x) of a vector $x = (x_1, \ldots, x_n)'$; the vector of partial derivatives, or gradient, reads

$$g(x) = \frac{\partial f(x)}{\partial x} = \begin{pmatrix} \partial f(x)/\partial x_1 \\ \vdots \\ \partial f(x)/\partial x_n \end{pmatrix}$$

whereas the matrix of second derivatives, or Hessian, is

$$H(x) = \frac{\partial^2 f(x)}{\partial x \, \partial x'} = \begin{pmatrix} \partial^2 f(x)/\partial x_1 \partial x_1 & \cdots & \partial^2 f(x)/\partial x_1 \partial x_n \\ \vdots & \ddots & \vdots \\ \partial^2 f(x)/\partial x_n \partial x_1 & \cdots & \partial^2 f(x)/\partial x_n \partial x_n \end{pmatrix}$$
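For a concrete case, the quadratic function $f(x) = c'x + \frac{1}{2} x'Qx$ with symmetric Q has gradient $c + Qx$ and Hessian Q. A sketch checking the analytic gradient against central finite differences (Q, c, and x0 are invented values):

```python
import numpy as np

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])         # symmetric matrix (hypothetical values)
c = np.array([1.0, -1.0])

def f(x):
    return c @ x + 0.5 * x @ Q @ x

x0 = np.array([0.3, -0.7])
g_analytic = c + Q @ x0            # gradient of a quadratic: c + Qx

# Central finite differences, one coordinate at a time
eps = 1e-6
g_numeric = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
print(np.allclose(g_analytic, g_numeric))   # True
```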


Taylor expansion

quadratic approximation akin to the usual Taylor expansion, but involving the gradient (column vector) as well as the Hessian matrix:

$$f(x) \approx f(x_0) + \sum_{i=1}^{n} \frac{\partial f(x_0)}{\partial x_i} (x_i - x_{0i}) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial^2 f(x_0)}{\partial x_i \partial x_j} (x_i - x_{0i})(x_j - x_{0j})$$

$$= f(x_0) + g(x_0)'(x - x_0) + \frac{1}{2} (x - x_0)' H(x_0)(x - x_0)$$
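A sketch of this quadratic approximation for a toy function with hand-computed gradient and Hessian (the function and the expansion point are invented for illustration):

```python
import numpy as np

def f(x):
    return np.exp(x[0]) + x[0] * x[1] ** 2

def g(x):   # gradient of f
    return np.array([np.exp(x[0]) + x[1] ** 2,
                     2.0 * x[0] * x[1]])

def H(x):   # Hessian of f
    return np.array([[np.exp(x[0]), 2.0 * x[1]],
                     [2.0 * x[1], 2.0 * x[0]]])

x0 = np.array([0.0, 1.0])
dx = np.array([0.1, -0.05])

quad = f(x0) + g(x0) @ dx + 0.5 * dx @ H(x0) @ dx
print(f(x0 + dx), quad)   # the two values agree to second order in dx
```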