
Talk:Square root of a matrix



This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Mid-priority on the project's priority scale.

Proof of unitary freedom

I think the proof of the unitary freedom of roots is not correct. It assumes that the columns of B, the positive root of T, span Cn. This is equivalent to saying B is invertible, which need not be the case. B won't be invertible if T isn't, and T will only be invertible if it is positive-definite. If it is positive-semidefinite, having some null eigenvalues, T will be singular. For example, take T = 0; then B = 0, but the columns of B don't span anything at all. Daniel Estévez (talk) 12:46, 6 December 2008 (UTC)

In the finite-dimensional setting, a partial isometry can be extended to a unitary. For the same reason, U can be taken to be unitary in the polar decomposition UP of a matrix. Mct mht (talk) 17:16, 6 December 2008 (UTC)
I really don't see how that helps. In the proof it is said that each column of A is a linear combination of the columns of B alone, because the columns of B span Cn. Moreover, I don't see how the columns of B can form an orthogonal basis for Cn. T diagonalizes as T = UDU*, with real non-negative eigenvalues because T is positive semidefinite; then you pick B = UD1/2U*. The columns of U form an orthonormal basis for Cn, but those of B don't. What I'm saying is that if T (and hence B) is positive definite, then the proof is OK, but if T is only positive semidefinite, then B has a nontrivial kernel, and so its columns cannot span the whole of Cn. Of course I may be completely wrong, as I am not really into linear algebra/functional analysis. Daniel Estévez (talk) 18:11, 6 December 2008 (UTC)
If A*A = B*B, the proof constructs a partial isometry whose initial and final subspaces are Ran(A) and Ran(B). When A (and therefore B) is not invertible, their ranges are not all of C^n, as you say. This is where one uses "in the finite dimensional setting, a partial isometry can be extended to a unitary." Call this unitary U; then B = UA, and indeed B*B = A*U*UA = A*A. Mct mht (talk) 21:30, 6 December 2008 (UTC)
Thanks. Now I understand how the proof works, provided you can construct the partial isometry. But I think this is not stated very clearly in the current proof. I would be glad if you reread the proof, adding the details you have explained to me. Daniel Estévez (talk) 10:22, 7 December 2008 (UTC)
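For a concrete view of the extension at work even in the singular case: the polar decomposition, as computed by scipy.linalg.polar, produces a genuinely unitary U with A = UP even when A is singular. A minimal sketch (the matrix A is an arbitrary rank-deficient example):

    import numpy as np
    from scipy.linalg import polar

    # A is singular, so Ran(A) is not all of C^2
    A = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
    U, B = polar(A)  # A = U @ B, with U unitary and B positive semidefinite
    assert np.allclose(A, U @ B)
    assert np.allclose(U.conj().T @ U, np.eye(2))  # U is unitary despite A being singular
    assert np.allclose(A.conj().T @ A, B @ B)      # B is the positive root of T = A*A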

Cholesky vs square root

This article states that Cholesky decomposition gives the square root. This is, I think, a mistake. B.B = A and L.L^T = A are not the same, and B will not equal L unless L is diagonal. Cf. http://yarchive.net/comp/sqrtm.html. I have edited this article appropriately. --Winterstein 11:03, 26 March 2007 (UTC)
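The distinction is easy to check numerically. A sketch using SciPy (the matrix A is an arbitrary symmetric positive definite example):

    import numpy as np
    from scipy.linalg import cholesky, sqrtm

    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
    L = cholesky(A, lower=True)  # A = L @ L.T, with L lower triangular
    B = sqrtm(A)                 # A = B @ B, with B symmetric positive definite
    assert np.allclose(L @ L.T, A)
    assert np.allclose(B @ B, A)
    print(np.allclose(L, B))     # False: the two "square roots" differ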

Well, the Cholesky decomposition gives a square root. The term "square root" is used in different senses in the two sections. Mct mht 11:09, 26 March 2007 (UTC)
At the beginning of the section Square root of positive operators, the article defines the square root of A as the operator B for which A = B*B, and in the next sentence it uses T½ to denote the matrix for which T = (T½)2. This was very confusing until I realized that T½ is necessarily self-adjoint, that it is therefore also a square root in the former sense, and that only the self-adjoint square root is unique. Is this reasoning correct? --Drizzd (talk) 11:17, 9 February 2008 (UTC)
Yes; more precisely, the positive square root is unique. Mct mht (talk) 17:23, 6 December 2008 (UTC)
Surely this should be changed then. Cholesky decomposition does not give a unique root: if L is considered a root because L.L^T = A, then L.R, where R is orthogonal, is also a root, since L.R.R^T.L^T = L.L^T = A. —Preceding unsigned comment added by Parapunter (talk · contribs) 05:01, 3 March 2010 (UTC)
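That non-uniqueness is also easy to verify numerically. A sketch, reusing the example matrix from the sketch above and an arbitrary rotation for R:

    import numpy as np
    from scipy.linalg import cholesky

    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
    L = cholesky(A, lower=True)
    theta = 0.7  # any angle gives a different root
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # orthogonal: R @ R.T = I
    M = L @ R
    assert np.allclose(M @ M.T, A)  # M is another root in the M.M^T sense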

Calculating the square root of a diagonalizable matrix

The description of calculating the square root of a diagonalizable matrix could be improved.

Currently it takes the matrix of eigenvectors as a given, then takes steps to calculate the eigenvalues from it. It is a very rare situation to have the eigenvectors before you have the eigenvalues. They are often calculated simultaneously, or, for small matrices, the eigenvalues are found first by finding the roots of the characteristic polynomial. (A concrete recipe along these lines is sketched below, after this comment.)

I realize it is easier to describe the step from eigenvectors to eigenvalues in matrix notation than the other way around, but the description should decide whether it wants to be a recipe or a theorem. If it's a recipe, it should have practical input, and if it's a theorem, the eigenvalues should be given alongside the eigenvectors.

Please comment if you believe this to be a bad idea. I will fix the article in a few weeks if no one stops me - if I remember. :-) -- Palmin
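For a recipe with practical input, as suggested above, the eigenvalues and eigenvectors can be computed together. A minimal NumPy sketch (the matrix is an arbitrary diagonalizable example with eigenvalues 81 and 9):

    import numpy as np

    A = np.array([[33.0, 24.0],
                  [48.0, 57.0]])
    w, V = np.linalg.eig(A)  # eigenvalues and eigenvectors come out together
    B = V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V)  # B = V D^(1/2) V^(-1)
    assert np.allclose(B @ B, A)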

Good point. Please do fix it. As an alternative, perhaps consider diagonalization as one step (and refer to diagonalizable matrix), but if you think it's better to spell it out in more detail, be my guest! -- Jitse Niesen (talk) 00:34, 15 March 2007 (UTC)
As I didn't see any edits from you, I rewrote the section myself. -- Jitse Niesen (talk) 12:38, 20 May 2007 (UTC)
I'm disappointed to see that this has been up for years, ignoring the fact that matrices in general have negative eigenvalues. 175.45.147.38 (talk) 12:43, 1 September 2010 (UTC)
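The point about negative eigenvalues is easy to demonstrate: a real matrix with negative eigenvalues may have no real square root at all. A sketch (the matrix is an arbitrary example with eigenvalues +1 and −1):

    import numpy as np
    from scipy.linalg import sqrtm

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])  # eigenvalues +1 and -1
    B = sqrtm(A)                # the principal square root has complex entries
    assert np.allclose(B @ B, A)
    print(np.iscomplexobj(B))   # True: no real square root exists here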

There are at least two major mistakes in the second half of the article. The definition of the square root of a matrix is wrong: given A, A^(1/2) means that A^(1/2) times A^(1/2) = A, and not A^(1/2)* times A^(1/2) [e.g. see Horn & Johnson, Matrix Analysis].

The other mistake is in the polar decomposition definition: a positive definite (not simply positive) matrix M can be decomposed as M = HP, where H is a Hermitian (symmetric in the real case) matrix and P is a unitary (orthogonal) matrix [e.g. see Bhatia, Matrix Analysis]. The fact that M is nonsingular is not enough to prove the theorem: there could be negative eigenvalues whose effect would be an imaginary diagonal element in D^(1/2). [Giuseppe Cammarata] —Preceding unsigned comment added by 88.75.149.153 (talk) 19:44, 7 April 2011 (UTC)
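The effect described in the last sentence is easy to exhibit. A sketch with an arbitrary nonsingular matrix that has a negative eigenvalue:

    import numpy as np

    M = np.array([[1.0,  0.0],
                  [0.0, -4.0]])          # nonsingular, but not positive definite
    w, V = np.linalg.eig(M)
    D_half = np.sqrt(w.astype(complex))  # the negative eigenvalue yields 2j
    print(D_half)                        # [1.+0.j  0.+2.j]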

I'm not so certain these are mistakes: they can probably all be explained by different conventions in different fields. (I seem to remember having the same reaction at first, but then seeing sources that supported the usage in the article.) That said, I agree that the article should be brought into accord with what seems to be the prevailing conventions. It's going to involve a bit of careful work and rewriting though, but I would support you in your efforts to correct the article. Sławomir Biały (talk) 20:41, 7 April 2011 (UTC)

Unitary freedom of square roots of positive operators

The current proof seems to make too many assumptions. For a strictly positive T = AA* = BB*, with B = T½ defined as the unique non-negative square root of T, it's true that B is invertible, and you can construct U = B−1A. However, the given proof that U is unitary also seems to assume that A is invertible:

U*U = (B−1A)*(B−1A) = A*(BB)−1A = A*(AA*)−1A,

and the last expression reduces to the identity only if (AA*)−1 can be split as (A*)−1A−1, i.e. only if A is invertible.

Is it necessarily the case that A is invertible? Wouldn't it be easier to use the equivalent definitions T = A*A = B*B (where again B = T½ is the unique non-negative square root of T) and construct U = AB−1 instead? That way, the factor U in A = UB can be proved unitary without assuming that A is invertible:

U*U = (AB−1)*(AB−1) = B−1A*AB−1 = B−1(B*B)B−1 = B−1BBB−1 = I,

using only that B is self-adjoint and invertible.

Rundquist (talk) 07:41, 3 February 2012 (UTC)
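Rundquist's construction is easy to check numerically when T is strictly positive. A sketch (the random A here is an arbitrary example, invertible with probability 1, so T = A*A is strictly positive):

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))  # invertible with probability 1
    T = A.conj().T @ A               # strictly positive: T = A*A
    B = sqrtm(T)                     # unique positive square root, invertible here
    U = A @ np.linalg.inv(B)
    assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary
    assert np.allclose(A, U @ B)                   # A = U B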

I updated the article to reflect the changes I suggested above. Rundquist (talk) 07:44, 9 February 2012 (UTC)
Of course it is not true in general that A is invertible. The original proof addressed this, and it was already discussed in the first section of this very talk page. And what is your B−1? If A is not invertible, B won't be either. The claim that U in A = UB "can be proved unitary without assuming that A is invertible" is just nonsense.
Unfortunately, this article has become a bit of a mess... Mct mht (talk) 21:28, 12 March 2012 (UTC)
Yes please take it in hand, if you're willing. Sławomir Biały (talk) 02:05, 13 March 2012 (UTC)


Proof at the beginning of the Properties section

I think the proof has some flaws. In fact it says that the property of having different square root matrices stems from the fact that the matrix is diagonalizable (i.e. that it can be written as A = VDV−1 with D diagonal); but this is clearly wrong, as the identity matrix is diagonalizable yet, as reported above, it has infinitely many square roots. Maybe it should be made explicit in the proof that the key property is that every eigenvalue is distinct? (I didn't check whether it would work that way, though. Anyhow, it is wrong as currently stated.) — Preceding unsigned comment added by 80.180.2.4 (talk) 21:15, 11 December 2014 (UTC)

I moved your comment from the top to the bottom of the list, as is customary. It must be midnight in Parabiago. What part of the up-front qualifying statement of having n distinct eigenvalues is not clear? Does any identity matrix have more than one distinct eigenvalue? Cuzkatzimhut (talk) 21:43, 11 December 2014 (UTC)

I am saying that the "proof" only uses the fact that the matrix is diagonalizable; nowhere does it use the fact that the matrix must have distinct eigenvalues. So there are flaws in the way it is currently presented. That the distinct-eigenvalues condition is essential is easily seen by thinking about the identity matrix, which has infinitely many roots, so the argument as presented does not hold up.
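To make that counterexample concrete: every reflection squares to the identity, so the identity matrix already has infinitely many square roots. A minimal sketch in NumPy:

    import numpy as np

    # each 2x2 reflection R satisfies R @ R = I, one for every angle theta
    for theta in (0.1, 0.7, 2.0):
        R = np.array([[np.cos(theta),  np.sin(theta)],
                      [np.sin(theta), -np.cos(theta)]])
        assert np.allclose(R @ R, np.eye(2))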