
AECO621: Problem Set 1
Due in class on Feb 23, 2017

1. Suppose $Y \in \mathbb{R}^{n \times p}$, $X \in \mathbb{R}^{n \times m}$, and $B \in \mathbb{R}^{m \times p}$. Show that
$$(Y - XB)'(Y - XB) \;\geq_L\; Y' M_X Y,$$
where $\geq_L$ denotes the Löwner partial ordering and $M_X = I - P_X = I - X(X'X)^{-}X'$, and that equality holds if and only if $B$ takes the value $B^* = (X'X)^{-}X'Y$.

2. Suppose $x \in \mathbb{R}^p$ and $y \in \mathbb{R}^q$ are random vectors and that $z = (x', y')'$ with
$$\operatorname{cov}(z) = \begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{pmatrix}.$$
Find $F \in \mathbb{R}^{q \times p}$ that minimizes $\operatorname{cov}(y - Fx)$.

3. Suppose we have a linear regression model $y_i = X_i \beta + \varepsilon_i$ $(i = 1, \dots, n)$, where $\beta$ is a $k \times 1$ vector of parameters and the $X_i$ are i.i.d. with finite second moments. Assume also that $\varepsilon_i \mid X_i \overset{\text{i.i.d.}}{\sim} N(0, \Omega)$ with $\Omega$ positive definite.
(a) Derive the MLE for $\beta$ and $\Omega$, and show that it indeed maximizes the likelihood.
(b) Compute the information matrix for $\beta$ and $\operatorname{vech}(\Omega)$.
(c) Compute the asymptotic variance of the MLEs.
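For Problem 1, a quick numerical sanity check (not a proof) of the Löwner bound and its equality case can be helpful while working out the algebra. All dimensions below are invented, and `np.linalg.pinv` stands in for the generalized inverse $(X'X)^{-}$; the identity being exercised is that the gap $(Y-XB)'(Y-XB) - Y'M_XY$ equals $(XB - P_XY)'(XB - P_XY)$, hence is positive semidefinite and vanishes at $B^*$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 20, 3, 2                         # illustrative sizes only
Y = rng.standard_normal((n, p))
X = rng.standard_normal((n, m))

XtX_ginv = np.linalg.pinv(X.T @ X)         # a generalized inverse (X'X)^-
B_star = XtX_ginv @ X.T @ Y                # B* = (X'X)^- X'Y
M_X = np.eye(n) - X @ XtX_ginv @ X.T       # annihilator M_X = I - P_X

def gap(B):
    # (Y - XB)'(Y - XB) - Y' M_X Y, claimed to be PSD for every B
    R = Y - X @ B
    return R.T @ R - Y.T @ M_X @ Y

# At an arbitrary B the gap has nonnegative eigenvalues (up to rounding) ...
B = rng.standard_normal((m, p))
assert np.all(np.linalg.eigvalsh(gap(B)) >= -1e-10)
# ... and at B* the gap vanishes, so the Löwner lower bound is attained.
assert np.allclose(gap(B_star), 0)
```

The check confirms the mechanism behind the proof: the gap factors as a Gram matrix, so it is PSD, and it is zero exactly when $XB = P_X Y$.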
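For Problem 2, the minimizer in the Löwner order is the best-linear-predictor coefficient $F^* = \Sigma_{yx}\Sigma_{xx}^{-1}$, since $\operatorname{cov}(y - Fx) - \operatorname{cov}(y - F^*x) = (F - F^*)\Sigma_{xx}(F - F^*)' \geq_L 0$. A numerical sketch of that comparison (the joint covariance here is randomly generated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2                                # illustrative sizes only
# Build a random positive definite joint covariance for z = (x', y')'
A = rng.standard_normal((p + q, p + q))
Sigma = A @ A.T + (p + q) * np.eye(p + q)
Sxx, Sxy = Sigma[:p, :p], Sigma[:p, p:]
Syx, Syy = Sigma[p:, :p], Sigma[p:, p:]

def cov_resid(F):
    # cov(y - Fx) = Syy - F Sxy - Syx F' + F Sxx F'
    return Syy - F @ Sxy - Syx @ F.T + F @ Sxx @ F.T

F_star = Syx @ np.linalg.inv(Sxx)          # candidate minimizer
F = rng.standard_normal((q, p))            # arbitrary competitor
# The excess covariance should be (F - F*) Sxx (F - F*)', hence PSD
diff = cov_resid(F) - cov_resid(F_star)
assert np.all(np.linalg.eigvalsh(diff) >= -1e-8)
```

Any competitor $F$ leaves a residual covariance at least as large as $\Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$, which is the Schur complement of $\Sigma_{xx}$ in $\operatorname{cov}(z)$.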
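For Problem 3, reading the model as a multivariate regression with vector-valued $y_i$ (consistent with $\Omega$ being a matrix and the appearance of $\operatorname{vech}(\Omega)$), the MLE first-order conditions are a GLS step for $\beta$ given $\Omega$ and the average residual outer product for $\Omega$ given $\beta$:
$$\hat\beta = \Big(\sum_i X_i'\hat\Omega^{-1}X_i\Big)^{-1}\sum_i X_i'\hat\Omega^{-1}y_i, \qquad \hat\Omega = \frac{1}{n}\sum_i \hat\varepsilon_i\hat\varepsilon_i'.$$
A hedged numerical sketch that alternates the two conditions to a fixed point; every dimension and data-generating choice below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 500, 2, 3                        # hypothetical: y_i in R^p, beta in R^k
beta_true = np.array([1.0, -0.5])
X = rng.standard_normal((n, p, k))         # each X_i is a p x k design block
L = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.2, -0.3, 1.0]])
Omega_true = L @ L.T                       # positive definite error covariance
eps = rng.multivariate_normal(np.zeros(p), Omega_true, size=n)
Y = np.einsum('npk,k->np', X, beta_true) + eps

# Alternate the two first-order conditions until they stabilize:
#   GLS step:      beta = (sum X_i' Om^{-1} X_i)^{-1} sum X_i' Om^{-1} y_i
#   residual step: Omega = (1/n) sum e_i e_i'
Omega = np.eye(p)
for _ in range(50):
    Oinv = np.linalg.inv(Omega)
    A = np.einsum('npk,pq,nql->kl', X, Oinv, X)
    b = np.einsum('npk,pq,nq->k', X, Oinv, Y)
    beta = np.linalg.solve(A, b)
    resid = Y - np.einsum('npk,k->np', X, beta)
    Omega = resid.T @ resid / n
```

With $n = 500$ the fixed point lands close to the data-generating values, which is a useful cross-check on the closed-form derivation in part (a); the information matrix in part (b) is block-diagonal between $\beta$ and $\operatorname{vech}(\Omega)$ under normality, which part (c) then inverts.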
