Please cite the publication as:
Donoho, D., M. Elad, and V. Temlyakov, "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise", IEEE Transactions on Information Theory, 52, 6–18.
Please cite the companion website as:
Donoho, D., M. Elad, and V. Temlyakov, "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise", RunMyCode companion website, http://www.execandshare.org/CompanionSite/Site115
Variable/Parameters  Description, constraint  Comments
No. Trials  Number of trials considered.  Warning: with 100 simulations, it takes around 3 hours to run.
Signal Length  The length of the signal.  It is 64 in this exercise.

David Donoho
Stanford University
United States
Michael Elad
Technion – Israel Institute of Technology
Israel
Vladimir Temlyakov
University of South Carolina
United States
David Donoho also created these companion sites
High-Dimensional Centrally-Symmetric Polytopes With Neighborliness Proportional to Dimension
Abstract
Let A be a d by n matrix, d < n. Let C be the regular cross-polytope (octahedron) in R^n. It has recently been shown that properties of the centrosymmetric polytope P = AC are of interest for finding sparse solutions to the underdetermined system of equations y = Ax [9]. In particular, it is valuable to know that P is centrally k-neighborly.
We study the face numbers of randomly-projected cross-polytopes in the proportional-dimensional case where d ~ δn, where the projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of R^n. We derive ρ_N(δ) > 0 with the property that, for any ρ < ρ_N(δ), with overwhelming probability for large d, the number of k-dimensional faces of P = AC is the same as for C, for 0 <= k <= ρd. This implies that P is centrally ⌊ρd⌋-neighborly, and its skeleton Skel_⌊ρd⌋(P) is combinatorially equivalent to Skel_⌊ρd⌋(C). We display graphs of ρ_N.
Two weaker notions of neighborliness are also important for understanding sparse solutions of linear equations: facial neighborliness and sectional neighborliness [9]; we study both. The weakest, (k, ε)-facial neighborliness, asks if the k-faces are all simplicial and if the numbers of k-dimensional faces satisfy f_k(P) >= f_k(C)(1 − ε). We characterize and compute the critical proportion ρ_F(δ) > 0 at which a phase transition occurs in k/d. The other, (k, ε)-sectional neighborliness, asks whether all, except for a small fraction ε, of the k-dimensional intrinsic sections of P are k-dimensional cross-polytopes. (Intrinsic sections intersect P with k-dimensional subspaces spanned by vertices of P.) We characterize and compute a proportion ρ_S(δ) > 0 guaranteeing this property for k/d ~ ρ < ρ_S(δ). We display graphs of ρ_S and ρ_F.
Donoho, D., "High-Dimensional Centrally-Symmetric Polytopes With Neighborliness Proportional to Dimension", Discrete & Computational Geometry, 35, 617–652.
Authors:
Donoho
Coders:
Donoho
Last update
10/08/2012
Ranking
16
Runs
3
Visits
33

The Optimal Hard Threshold for Singular Values is 4/sqrt(3)
Abstract
We consider recovery of low-rank matrices from noisy data by hard thresholding of singular values, where singular values below a prescribed threshold \lambda are set to 0. We study the asymptotic MSE (AMSE) in a framework where the matrix size is large compared to the rank of the matrix to be recovered, and the signal-to-noise ratio of the low-rank piece stays constant. The AMSE-optimal choice of hard threshold, in the case of an n-by-n matrix in noise level \sigma, is simply (4/\sqrt{3}) \sqrt{n}\sigma \approx 2.309 \sqrt{n}\sigma when \sigma is known, or simply 2.858\cdot y_{med} when \sigma is unknown, where y_{med} is the median empirical singular value. For non-square m-by-n matrices with m \neq n, these thresholding coefficients are replaced with different provided constants. In our asymptotic framework, this thresholding rule adapts to unknown rank and to unknown noise level in an optimal manner: it is always better than hard thresholding at any other value, no matter what the matrix is that we are trying to recover, and is always better than ideal Truncated SVD (TSVD), which truncates at the true rank of the low-rank matrix we are trying to recover. Hard thresholding at the recommended value to recover an n-by-n matrix of rank r guarantees an AMSE at most 3nr\sigma^2. In comparison, the guarantee provided by TSVD is 5nr\sigma^2, the guarantee provided by optimally tuned singular value soft thresholding is 6nr\sigma^2, and the best guarantee achievable by any shrinkage of the data singular values is 2nr\sigma^2. Empirical evidence shows that these AMSE properties of the 4/\sqrt{3} thresholding rule remain valid even for relatively small n, and that the performance improvement over TSVD and other shrinkage rules is substantial, turning it into the practical hard threshold of choice.
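The thresholding rule described in the abstract is simple to sketch. The following is an illustrative numpy implementation for the square case, not the authors' code; the function name `hard_threshold_denoise` is ours:

```python
import numpy as np

def hard_threshold_denoise(Y, sigma=None):
    """Hard-threshold the singular values of a square n-by-n data matrix Y.

    With known noise level sigma, the threshold is (4/sqrt(3)) * sqrt(n) * sigma;
    with sigma unknown, it is 2.858 times the median empirical singular value,
    as recommended in the abstract. (Illustrative sketch, square case only.)
    """
    n = Y.shape[0]
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    if sigma is not None:
        tau = (4.0 / np.sqrt(3.0)) * np.sqrt(n) * sigma
    else:
        tau = 2.858 * np.median(s)
    s_hat = np.where(s >= tau, s, 0.0)   # kill singular values below the threshold
    return U @ (s_hat[:, None] * Vt)
```

On a strongly low-rank signal plus unit-variance noise, the rule discards the noise singular values (which concentrate near 2\sqrt{n}\sigma, below the threshold 2.309\sqrt{n}\sigma) and keeps the signal ones.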
Gavish, M., and D. Donoho, "The Optimal Hard Threshold for Singular Values is 4/sqrt(3)", Stanford University.
Authors:
Donoho
Gavish
Coders:
Gavish
Donoho
Last update
05/30/2013
Ranking
9999
Runs
17
Visits
N.A.

Neighborliness of Randomly-Projected Simplices in High Dimensions
Abstract
Let A be a d by n matrix, d < n. Let T = T^{n−1} be the standard regular simplex in R^n. We count the faces of the projected simplex AT in the case where the projection is random, the dimension d is large, and n and d are comparable: d ~ δn, δ ∈ (0, 1). The projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of R^n. We derive ρ_N(δ) > 0 with the property that, for any ρ < ρ_N(δ), with overwhelming probability for large d, the number of k-dimensional faces of P = AT is exactly the same as for T, for 0 <= k <= ρd. This implies that P is ⌊ρd⌋-neighborly, and its skeleton Skel_⌊ρd⌋(P) is combinatorially equivalent to Skel_⌊ρd⌋(T). We display graphs of ρ_N. We also study a weaker notion of neighborliness; it asks if the k-faces are all simplicial and if the numbers of k-dimensional faces satisfy f_k(P) >= f_k(T)(1 − ε). This was already considered by Vershik and Sporyshev, who obtained qualitative results about the existence of a threshold ρ_VS(δ) > 0 at which a phase transition occurs in k/d. We compute and display ρ_VS and compare it to ρ_N.
Our results imply that the convex hull of n Gaussian samples in R^d, with n large and proportional to d, ‘looks like a simplex’ in the following sense. In a typical realization of such a high-dimensional Gaussian point cloud with d ~ δn, all points are on the boundary of the convex hull, and all pairwise line segments, triangles, quadrangles, …, ⌊ρd⌋-angles are on the boundary, for ρ < ρ_N(d/n).
Our results also quantify a precise phase transition in the ability of linear programming to find the sparsest nonnegative solution to typical systems of underdetermined linear equations: when there is a solution with fewer than ρ_VS(d/n)·d nonzeros, linear programming will find that solution.
Donoho, D., and J. Tanner, "Neighborliness of Randomly-Projected Simplices in High Dimensions", Proceedings of the National Academy of Sciences of the United States of America, 102, 9452–9457.
Authors:
Donoho
Tanner
Coders:
Donoho
Tanner
Last update
10/08/2012
Ranking
19
Runs
3
Visits
52

Sparse Nonnegative Solution of Underdetermined Linear Equations by Linear Programming
Abstract
Consider an underdetermined system of linear equations y = Ax with known d × n matrix A and known y. We seek the sparsest nonnegative solution, i.e. the nonnegative x with fewest nonzeros satisfying y = Ax. In general this problem is NP-hard. However, for many matrices A there is a threshold phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming.
In classical convex polytope theory, a polytope P is called k-neighborly if every set of k vertices of P spans a face of P. Let a_j denote the jth column of A, 1 <= j <= n, let a_0 = 0, and let P denote the convex hull of the a_j. We say P is outwardly k-neighborly if every subset of k vertices not including 0 spans a face of P. We show that outward k-neighborliness is completely equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum.
Using this and classical results on polytope neighborliness we obtain two types of corollaries. First, because many ⌊d/2⌋-neighborly polytopes are known, there are many systems where the sparsest solution is available by convex optimization rather than combinatorial optimization, provided the answer has fewer nonzeros than half the number of equations. We mention examples involving incompletely-observed Fourier transforms and Laplace transforms. Second, results on classical neighborliness of high-dimensional randomly-projected simplices imply that, if A is a typical uniformly-distributed random orthoprojector with n = 2d and n large, the sparsest nonnegative solution to y = Ax can be found by linear programming provided it has fewer nonzeros than 1/8 the number of equations.
We also consider a notion of weak neighborliness, in which the overwhelming majority of k-sets of a_j's not containing 0 span a face. This implies that most nonnegative vectors x with k nonzeros are uniquely determined by y = Ax. As a corollary of recent work counting faces of random simplices, it is known that most polytopes P generated by large n by 2n uniformly-distributed orthoprojectors A are weakly k-neighborly with k ≈ 0.558n. We infer that for most n by 2n underdetermined systems having a sparse solution with fewer nonzeros than roughly half the number of equations, the sparsest solution can be found by linear programming.
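The linear program the abstract relies on — find the minimal-sum nonnegative solution of y = Ax — can be sketched directly with an off-the-shelf LP solver. This is an illustrative sketch using scipy, not the authors' code; the helper name `sparsest_nonneg_by_lp` is ours:

```python
import numpy as np
from scipy.optimize import linprog

def sparsest_nonneg_by_lp(A, y):
    """Solve min 1'x subject to Ax = y, x >= 0.

    By the outward-neighborliness equivalence in the abstract, whenever
    y = Ax admits a sufficiently sparse nonnegative solution, this LP
    returns exactly that solution. (Illustrative sketch.)
    """
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=y,
                  bounds=[(0, None)] * n, method="highs")
    if not res.success:
        raise RuntimeError("LP failed: " + res.message)
    return res.x
```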
Donoho, D., and J. Tanner, "Sparse Nonnegative Solution of Underdetermined Linear Equations by Linear Programming", Proceedings of the National Academy of Sciences of the United States of America, 102, 9446–9451.
Authors:
Donoho
Tanner
Coders:
Donoho
Tanner
Last update
10/08/2012
Ranking
17
Runs
6
Visits
81

On the Stability of the Basis Pursuit in the Presence of Noise
Abstract
Given a signal S ∈ R^N and a full-rank matrix D ∈ R^{N×L} with N < L, we define the signal's overcomplete representation as α ∈ R^L satisfying S = Dα. Among the infinitely many solutions of this underdetermined linear system of equations, we have special interest in the sparsest representation, i.e., the one minimizing ||α||_0. This problem has a combinatorial flavor to it, and its direct solution is impossible even for moderate L. Approximation algorithms are thus required, and one such appealing technique is the basis pursuit (BP) algorithm. This algorithm has been the focus of recent theoretical research effort. It was found that if indeed the representation is sparse enough, BP finds it accurately. When an error is permitted in the composition of the signal, we no longer require exact equality S = Dα. The BP has been extended to treat this case, leading to a denoising algorithm. The natural question to pose is how the above-mentioned theoretical results generalize to this more practical mode of operation. In this paper we propose such a generalization. The behavior of the basis pursuit in the presence of noise has been the subject of two independent very wide contributions released for publication very recently. This paper is another contribution in this direction, but as opposed to the others mentioned, this paper aims to present a somewhat simplified picture of the topic, and thus could be referred to as a primer to this field. Specifically, we establish here the stability of the BP in the presence of noise for sparse enough representations. We study both the case of a general dictionary D, and a special case where D is built as a union of orthonormal bases. This work is a direct generalization of the noiseless BP study, and indeed, when the noise power is reduced to zero, we obtain the known results of the noiseless BP.
Donoho, D., and M. Elad, "On the Stability of the Basis Pursuit in the Presence of Noise", Signal Processing, 86, 511–532.
Authors:
Donoho
Elad
Coders:
Donoho
Elad
Last update
10/08/2012
Ranking
29
Runs
N.A.
Visits
63

Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices
Abstract
In compressed sensing, one takes n < N samples of an N-dimensional vector x_0 using an n × N matrix A, obtaining undersampled measurements y = Ax_0. For random matrices with Gaussian i.i.d. entries, it is known that, when x_0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N) phase diagram, convex optimization min ||x||_1 subject to y = Ax, x ∈ X^N, typically finds the sparsest solution, while outside that region, it typically fails. It has been shown empirically that the same property – with the same phase transition location – holds for a wide range of non-Gaussian random matrix ensembles. We consider specific deterministic matrices including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Extensive experiments show that for a typical k-sparse object, convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian matrices. In our experiments, we considered coefficients constrained to X^N for four different sets X ∈ {[0, 1], R_+, R, C}. We establish this finding for each of the associated four phase transitions.
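For the real-valued case X = R, the convex program min ||x||_1 subject to y = Ax can be posed as a linear program by the standard split x = u − v with u, v >= 0. A minimal scipy sketch (not the paper's experimental code, which uses the specific deterministic matrices listed above):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 subject to y = Ax over real x via the LP split
    x = u - v, u, v >= 0, minimizing 1'u + 1'v. (Illustrative sketch.)"""
    n = A.shape[1]
    # variables z = [u; v]; constraint A u - A v = y
    res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    if not res.success:
        raise RuntimeError("LP failed: " + res.message)
    u, v = res.x[:n], res.x[n:]
    return u - v
```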
Monajemi, H., and D. Donoho, "Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices", Stanford University.
Authors:
Monajemi
Jafarpour
Gavish
Donoho
Coders:
Monajemi
Donoho
Last update
01/04/2013
Ranking
9999
Runs
13
Visits
86

Michael Elad also created these companion sites
Why Simple Shrinkage is Still Relevant for Redundant Representations?
Abstract
Shrinkage is a well-known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that the sparsity of the signal's representation is enforced using a unitary transform. Still, shrinkage is also practiced with non-unitary, and even redundant, representations, typically leading to very satisfactory results. In this paper we shed some light on this behavior. The main argument in this paper is that such simple shrinkage could be interpreted as the first iteration of an algorithm that solves the basis pursuit denoising (BPDN) problem. While the desired solution of BPDN is hard to obtain in general, we develop in this paper a simple iterative procedure for the BPDN minimization that amounts to stepwise shrinkage. We demonstrate how the simple shrinkage emerges as the first iteration of this novel algorithm. Furthermore, we show how shrinkage can be iterated, turning into an effective algorithm that minimizes the BPDN via simple shrinkage steps, in order to further strengthen the denoising effect.
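The stepwise-shrinkage idea can be sketched as a gradient step on the quadratic term of the BPDN objective followed by soft thresholding. The sketch below assumes a fixed step 1/||D||² and is not the paper's exact algorithm or constants; the helper names are ours:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise shrinkage (soft thresholding) with threshold t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterated_shrinkage(D, S, lam, n_iter=200):
    """Minimize the BPDN objective 0.5*||S - D a||^2 + lam*||a||_1 by
    repeated gradient-step-plus-shrinkage. Starting from a = 0, the first
    iteration is exactly simple shrinkage of D'S (scaled by the step size),
    matching the abstract's interpretation. (Illustrative sketch.)"""
    c = np.linalg.norm(D, 2) ** 2          # step constant >= largest eigenvalue of D'D
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + D.T @ (S - D @ a) / c, lam / c)
    return a
```

Each pass strictly cheapens the BPDN objective, so iterating the shrinkage strengthens the denoising effect, as the abstract describes.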
Elad, M., "Why Simple Shrinkage is Still Relevant for Redundant Representations?", IEEE Transactions on Information Theory, 52, 5559–5569.
Authors:
Elad
Coders:
Elad
Last update
07/06/2012
Ranking
28
Runs
4
Visits
30

On the Stability of the Basis Pursuit in the Presence of Noise

Vladimir Temlyakov also created these companion sites
Other companion sites on the same paper
Other companion sites on similar papers
Sparse Nonnegative Solution of Underdetermined Linear Equations by Linear Programming

Why Simple Shrinkage is Still Relevant for Redundant Representations?

Neighborliness of Randomly-Projected Simplices in High Dimensions

The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising
Abstract
Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, ..., y_n of X_0, where y_i = Trace(a_i' X_0) and each a_i is an M by N matrix. For measurement matrices with Gaussian i.i.d. entries, it is known that if X_0 is of low rank, it is recoverable from just a few measurements. A popular approach for matrix recovery is Nuclear Norm Minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Trace(a_i' X) for all 1 <= i <= n, where ||.||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction \delta(n,M,N) = n/(MN), rank fraction \rho = r/N and aspect ratio \beta = M/N. Specifically, a curve \delta^* = \delta^*(\rho;\beta) exists such that, if \delta > \delta^*(\rho;\beta), NNM typically succeeds, while if \delta < \delta^*(\rho;\beta), it typically fails.
An apparently quite different problem is matrix denoising in Gaussian noise, where an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has i.i.d. Gaussian entries. It has been empirically observed that, if X_0 has low rank, it may be recovered quite accurately from the noisy measurement Y. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y - X||_F^2/2 + \lambda ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax MSE, M(\rho) = \lim_{N -> \infty} \inf_\lambda \sup_{rank(X) <= \rho N} MSE(X, \hat{X}_\lambda).
We report extensive experiments showing that the phase transition \delta^*(\rho) in the first problem (Matrix Recovery from Gaussian Measurements) coincides with the minimax risk curve M(\rho) in the second problem (Matrix Denoising in Gaussian Noise): \delta^*(\rho) = M(\rho), for any rank fraction 0 < \rho < 1.
Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks. Different predictions M(\rho) of the phase transition location were used in the two different cases, and were validated by the experimental data.
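The unconstrained denoising problem min ||Y - X||_F^2/2 + \lambda ||X||_* has a well-known closed-form solution: soft-threshold the singular values of Y at \lambda (singular value shrinkage). A minimal numpy sketch of that scheme (illustrative, not the experiment code):

```python
import numpy as np

def svt_denoise(Y, lam):
    """Solve min_X 0.5*||Y - X||_F^2 + lam*||X||_* in closed form by
    soft-thresholding the singular values of Y at lam."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # shrink each singular value toward 0
    return U @ (s_shrunk[:, None] * Vt)
```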
Gavish, M., "The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising", Stanford University.
Authors:
Donoho
Gavish
Montanari
Coders:
Gavish
Last update
02/15/2013
Ranking
9999
Runs
8
Visits
N.A.

High-Dimensional Centrally-Symmetric Polytopes With Neighborliness Proportional to Dimension

Bartlett's Formula for a General Class of Non Linear Processes
Abstract
A Bartlett-type formula is proposed for the asymptotic distribution of the sample autocorrelations of nonlinear processes. The asymptotic covariances between sample autocorrelations are expressed as the sum of two terms. The first term corresponds to the standard Bartlett's formula for linear processes, involving only the autocorrelation function of the observed process. The second term, which is specific to nonlinear processes, involves the autocorrelation function of the observed process, the kurtosis of the linear innovation process and the autocorrelation function of its square. This formula is obtained under a symmetry assumption on the linear innovation process. It is illustrated on ARMA–GARCH models and compared to the standard formula. An empirical application on financial time series is proposed.
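The statistic the formula concerns is the ordinary sample autocorrelation. A minimal numpy sketch of the standard estimator (illustrative only; the Bartlett-type asymptotic covariances themselves are not computed here):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r(1), ..., r(max_lag) of a series x, using the
    standard estimator r(h) = c(h)/c(0) with
    c(h) = (1/n) * sum_t (x_t - xbar)(x_{t+h} - xbar)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c0 = np.dot(xc, xc) / n
    return np.array([np.dot(xc[:n - h], xc[h:]) / (n * c0)
                     for h in range(1, max_lag + 1)])
```

For i.i.d. data the classical Bartlett formula predicts these autocorrelations have asymptotic variance 1/n; the nonlinear correction term in the paper modifies that variance for, e.g., GARCH innovations.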
Francq, C., and J. Zakoian, "Bartlett's Formula for a General Class of Non Linear Processes", Journal of Time Series Analysis, 30, 449–465.
Authors:
Francq
Zakoian
Coders:
Francq
Zakoian
Last update
07/23/2012
Ranking
8
Runs
65
Visits
522

The Optimal Hard Threshold for Singular Values is 4/sqrt(3)

Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices
Abstract
In compressed sensing, one takes n < N samples of an N dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0 . For random matrices with Gaussian i.i.d entries, it is known that, when x0 is ksparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N )phase diagram, convex optimization min x_1 subject to y = Ax, x ∈ X^N typically ﬁnds the sparsest solution, while outside that region, it typically fails. It has been shown empirically that the same property – with the same phase transition location – holds for a wide range of nonGaussian random matrix ensembles. We consider speciﬁc deterministic matrices including Spikes and Sines, Spikes and Noiselets, Paley Frames, DelsarteGoethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Extensive experiments show that for a typical ksparse object, convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian matrices. In our experiments, we considered coefﬁcients constrained to X^N for four different sets X ∈ {[0, 1], R_+ , R, C}. We establish this ﬁnding for each of the associated four phase transitions.
Monajemi, H., and D. Donoho, "Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices", Stanford University.
Authors: Monajemi, Jafarpour, Gavish, Donoho
Coders: Monajemi, Donoho
Last update: 01/04/2013
Ranking: 9999
Runs: 13
Visits: 86
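The convex program min ||x||_1 subject to y = Ax (for X = R) can be sketched in a few lines. The ADMM solver below is a hypothetical pure-NumPy illustration, not the solver used in the paper's experiments; the parameter choices (rho, iteration count) and the Gaussian test matrix are arbitrary assumptions.

```python
import numpy as np

def basis_pursuit(A, y, rho=1.0, iters=2000):
    """ADMM sketch for min ||x||_1 subject to A x = y (real-valued case)."""
    n, N = A.shape
    P = A.T @ np.linalg.inv(A @ A.T)        # used to project onto {x : Ax = y}
    x = np.zeros(N); z = np.zeros(N); u = np.zeros(N)
    for _ in range(iters):
        w = z - u
        x = w - P @ (A @ w - y)             # projection onto the constraint set
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - 1.0 / rho, 0.0)  # soft threshold
        u += x - z
    return z

# A k-sparse vector well inside the Gaussian phase-transition region
rng = np.random.default_rng(1)
n, N, k = 40, 80, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
xhat = basis_pursuit(A, A @ x0)
```

With k/n = 0.125 and n/N = 0.5, this instance lies inside the Gaussian phase-transition region described in the abstract, so the l1 minimizer coincides with x_0 and the iterates recover it.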

Adaptive Estimation of Vector Autoregressive Models with Time-Varying Variance: Application to Testing Linear Causality in Mean
Abstract
Linear Vector AutoRegressive (VAR) models whose innovations may be unconditionally heteroscedastic and serially dependent are considered. The volatility structure is deterministic and quite general, including breaks or trending variances as special cases. In this framework we propose Ordinary Least Squares (OLS), Generalized Least Squares (GLS), and Adaptive Least Squares (ALS) procedures. The GLS estimator requires knowledge of the time-varying variance structure, while in the ALS approach the unknown variance is estimated by kernel smoothing of the outer product of the OLS residual vectors. Different bandwidths for the different cells of the time-varying variance matrix are also allowed. We derive the asymptotic distribution of the proposed estimators of the VAR model coefficients and compare their properties. In particular we show that the ALS estimator is asymptotically equivalent to the infeasible GLS estimator. This asymptotic equivalence holds uniformly with respect to the bandwidth(s) in a given range and hence justifies data-driven bandwidth rules. Using these results we build Wald tests for linear Granger causality in mean that are adapted to VAR processes driven by errors with nonstationary volatility. It is also shown that the commonly used standard Wald test for linear Granger causality in mean is potentially unreliable in our framework (incorrect level and lower asymptotic power). Monte Carlo and real-data experiments illustrate the use of the different estimation approaches for the analysis of VAR models with time-varying variance innovations.
Raïssi, H., "Adaptive Estimation of Vector Autoregressive Models with Time-Varying Variance: Application to Testing Linear Causality in Mean", IRMAR-INSA and CREST-ENSAI.
Authors: Patilea
Coders: Raïssi
Last update: 10/08/2012
Ranking: 58
Runs: 9
Visits: 169
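As a toy illustration of the setting, the OLS step, which is the baseline that the GLS/ALS procedures then reweight, can be sketched for a VAR(1) model whose innovations have a deterministically trending variance. Everything below (function name, dimensions, the variance path) is an illustrative assumption, not the author's code; OLS remains consistent here even though the innovations are heteroscedastic.

```python
import numpy as np

def var1_ols(Y):
    """OLS estimate of A in the VAR(1) model y_t = A y_{t-1} + e_t (no intercept)."""
    X, Z = Y[1:], Y[:-1]                     # regressand / lagged regressor
    A_hat = np.linalg.lstsq(Z, X, rcond=None)[0].T
    resid = X - Z @ A_hat.T                  # residuals feed the ALS variance estimate
    return A_hat, resid

# Simulate a stable bivariate VAR(1) with a trending innovation variance
rng = np.random.default_rng(2)
T, d = 5000, 2
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
Y = np.zeros((T, d))
for t in range(1, T):
    sigma_t = 0.5 + t / T                    # deterministic trending volatility
    Y[t] = A @ Y[t - 1] + sigma_t * rng.standard_normal(d)
A_hat, resid = var1_ols(Y)
```

In the paper's ALS approach, the outer products resid[t] resid[t]^T would then be kernel-smoothed over t to estimate the time-varying variance matrix used for reweighting.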

On the Stability of the Basis Pursuit in the Presence of Noise
Abstract
Given a signal S ∈ R^N and a full-rank matrix D ∈ R^{N×L} with N < L, we define the signal's overcomplete representation as a ∈ R^L satisfying S = Da. Among the infinitely many solutions of this underdetermined linear system of equations, we have special interest in the sparsest representation, i.e., the one minimizing ||a||_0. This problem has a combinatorial flavor to it, and its direct solution is impossible even for moderate L. Approximation algorithms are thus required, and one such appealing technique is the basis pursuit (BP) algorithm. This algorithm has been the focus of recent theoretical research effort. It was found that if the representation is indeed sparse enough, BP finds it accurately. When an error is permitted in the composition of the signal, we no longer require exact equality S = Da. BP has been extended to treat this case, leading to a denoising algorithm. The natural question to pose is how the above-mentioned theoretical results generalize to this more practical mode of operation. In this paper we propose such a generalization. The behavior of the basis pursuit in the presence of noise has been the subject of two independent, very wide contributions released for publication very recently. This paper is another contribution in this direction, but as opposed to those mentioned, it aims to present a somewhat simplified picture of the topic, and thus could be referred to as a primer to this field. Specifically, we establish here the stability of BP in the presence of noise for sufficiently sparse representations. We study both the case of a general dictionary D and the special case where D is built as a union of orthonormal bases. This work is a direct generalization of the noiseless BP study, and indeed, when the noise power is reduced to zero, we obtain the known results of the noiseless BP.
Donoho, D., and M. Elad, "On the Stability of the Basis Pursuit in the Presence of Noise", Signal Processing, 86, 511-532.
Authors: Donoho, Elad
Coders: Donoho, Elad
Last update: 10/08/2012
Ranking: 29
Runs: N.A.
Visits: 63
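For the noisy case discussed in the abstract, the equality S = Da is typically replaced by an l1-penalized least-squares fit. The ISTA loop below is a generic sketch of that denoising variant, using as D a union of two orthonormal bases (identity and a normalized Hadamard basis), the special dictionary structure studied in the paper; the penalty lam, the step size, and the simulation setup are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ista(D, S, lam, iters=1000):
    """ISTA for the denoising variant min_a 0.5*||S - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        v = a - D.T @ (D @ a - S) / L        # gradient step on the quadratic term
        a = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # shrinkage step
    return a

# Dictionary: union of the identity and a normalized Hadamard basis (D D^T = 2I)
N = 64
H = np.array([[1.0]])
while H.shape[0] < N:
    H = np.block([[H, H], [H, -H]])          # Sylvester construction
D = np.hstack([np.eye(N), H / np.sqrt(N)])

# Sparse representation, observed through additive noise
rng = np.random.default_rng(3)
a0 = np.zeros(2 * N)
a0[rng.choice(2 * N, size=4, replace=False)] = 2.0
S = D @ a0 + 0.05 * rng.standard_normal(N)
a_hat = ista(D, S, lam=0.2)
```

With a representation this sparse relative to the mutual coherence of the two bases (1/sqrt(N) = 1/8 here), the recovered a_hat is stably close to a_0, which is the kind of guarantee the paper establishes.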

Frequently Asked Questions