Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices
By Hatef Monajemi, Sina Jafarpour, Matan Gavish, and David Donoho
Stanford University (2012)

Coders: Hatef Monajemi (Stanford University, United States) and David Donoho (Stanford University, United States)

This code generates the figures in the paper "Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices" by H. Monajemi, S. Jafarpour, the Stat330/CME362 Collaboration, M. Gavish, and D. L. Donoho.
Created: November 07, 2012
Software: R 0.1
Last update: January 04, 2013
Abstract
In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with Gaussian i.i.d. entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N)-phase diagram, convex optimization min ||x||_1 subject to y = Ax, x ∈ X^N typically finds the sparsest solution, while outside that region, it typically fails. It has been shown empirically that the same property – with the same phase transition location – holds for a wide range of non-Gaussian random matrix ensembles. We consider specific deterministic matrices including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Extensive experiments show that for a typical k-sparse object, convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian matrices. In our experiments, we considered coefficients constrained to X^N for four different sets X ∈ {[0, 1], R_+, R, C}. We establish this finding for each of the associated four phase transitions.
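For readers who want to see what one cell of such a phase-diagram experiment looks like, here is a minimal sketch (Python with numpy/scipy, not the companion code; problem sizes and tolerances are illustrative) that estimates the empirical success probability of ℓ1 minimization for a Gaussian matrix in the X = R case, solving the ℓ1 problem as a linear program:

```python
# Minimal sketch (not the paper's code): estimate the empirical success
# probability of l1-minimization at one point (rho, delta) of the phase
# diagram, for Gaussian A and real coefficients (the X = R case).
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, via the standard LP split x = u - v, u, v >= 0."""
    n, N = A.shape
    c = np.ones(2 * N)                      # objective: sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]

def success_rate(N=200, delta=0.5, rho=0.2, trials=20, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n = int(delta * N)                      # number of measurements
    k = int(rho * n)                        # sparsity of x0
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((n, N)) / np.sqrt(n)
        x0 = np.zeros(N)
        x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
        xhat = basis_pursuit(A, A @ x0)
        hits += np.linalg.norm(xhat - x0) <= tol * np.linalg.norm(x0)
    return hits / trials

# (rho, delta) = (0.2, 0.5) lies below the Gaussian phase transition,
# so recovery should succeed in nearly all trials.
print(success_rate())
```

Sweeping rho and delta over a grid and repeating this measurement traces out the empirical phase transition; the paper's experiments do the analogous computation with the deterministic matrices listed above.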

Please cite the publication as:

Monajemi, H., S. Jafarpour, M. Gavish, and D. Donoho, "Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices", Stanford University.

Please cite the companion website as:

Monajemi, H., S. Jafarpour, M. Gavish, and D. Donoho, "Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices", RunMyCode companion website, http://www.execandshare.org/CompanionSite/Site190

Variable/Parameters:

Figure number — the number of the figure you would like to reproduce (e.g. 1).

Hatef Monajemi and David Donoho also created these companion sites:

    High-Dimensional Centrally-Symmetric Polytopes With Neighborliness Proportional to Dimension
    Abstract
Let A be a d by n matrix, d < n. Let C be the regular cross polytope (octahedron) in R^n. It has recently been shown that properties of the centrosymmetric polytope P = AC are of interest for finding sparse solutions to the underdetermined system of equations y = Ax [9]. In particular, it is valuable to know that P is centrally k-neighborly. We study the face numbers of randomly-projected cross-polytopes in the proportional-dimensional case where d ∼ δn, and the projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of R^n. We derive ρ_N(δ) > 0 with the property that, for any ρ < ρ_N(δ), with overwhelming probability for large d, the number of k-dimensional faces of P = AC is the same as for C, for 0 ≤ k ≤ ρd. This implies that P is centrally ⌊ρd⌋-neighborly, and its skeleton Skel_⌊ρd⌋(P) is combinatorially equivalent to Skel_⌊ρd⌋(C). We display graphs of ρ_N. Two weaker notions of neighborliness are also important for understanding sparse solutions of linear equations: facial neighborliness and sectional neighborliness [9]; we study both. The weakest, (k, ε)-facial neighborliness, asks if the k-faces are all simplicial and if the numbers of k-dimensional faces satisfy f_k(P) ≥ f_k(C)(1 − ε). We characterize and compute the critical proportion ρ_F(δ) > 0 at which a phase transition occurs in k/d. The other, (k, ε)-sectional neighborliness, asks whether all, except for a small fraction ε, of the k-dimensional intrinsic sections of P are k-dimensional cross-polytopes. (Intrinsic sections intersect P with k-dimensional subspaces spanned by vertices of P.) We characterize and compute a proportion ρ_S(δ) > 0 guaranteeing this property for k/d ∼ ρ < ρ_S(δ). We display graphs of ρ_S and ρ_F.
    Donoho, D., "High-Dimensional Centrally-Symmetric Polytopes With Neighborliness Proportional to Dimension", Discrete & Computational Geometry, 35, 617-652.
Authors: Donoho. Coders: Donoho. Last update: 10/08/2012.
    The Optimal Hard Threshold for Singular Values is 4/sqrt(3)
    Abstract
We consider recovery of low-rank matrices from noisy data by hard thresholding of singular values, where singular values below a prescribed threshold \lambda are set to 0. We study the asymptotic MSE in a framework where the matrix size is large compared to the rank of the matrix to be recovered, and the signal-to-noise ratio of the low-rank piece stays constant. The AMSE-optimal choice of hard threshold, in the case of an n-by-n matrix in noise level \sigma, is simply (4/\sqrt{3}) \sqrt{n}\sigma \approx 2.309 \sqrt{n}\sigma when \sigma is known, or simply 2.858\cdot y_{med} when \sigma is unknown, where y_{med} is the median empirical singular value. For nonsquare m by n matrices with m \neq n, these thresholding coefficients are replaced with different constants, which we provide. In our asymptotic framework, this thresholding rule adapts to unknown rank and to unknown noise level in an optimal manner: it is always better than hard thresholding at any other value, no matter what matrix we are trying to recover, and is always better than ideal Truncated SVD (TSVD), which truncates at the true rank of the low-rank matrix we are trying to recover. Hard thresholding at the recommended value to recover an n-by-n matrix of rank r guarantees an AMSE at most 3nr\sigma^2. In comparison, the guarantee provided by TSVD is 5nr\sigma^2, the guarantee provided by optimally tuned singular value soft thresholding is 6nr\sigma^2, and the best guarantee achievable by any shrinkage of the data singular values is 2nr\sigma^2. Empirical evidence shows that these AMSE properties of the 4/\sqrt{3} thresholding rule remain valid even for relatively small n, and that the performance improvement over TSVD and other shrinkage rules is substantial, turning it into the practical hard threshold of choice.
    Gavish, M., and D. Donoho, "The Optimal Hard Threshold for Singular Values is 4/sqrt(3)", Stanford University.
Authors: Donoho, Gavish. Coders: Gavish, Donoho. Last update: 05/30/2013.
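As a quick illustration of the rule in the abstract above, the following sketch (Python/numpy, assuming the square-matrix, known-σ case; not the companion code) denoises by zeroing all singular values below (4/√3)·√n·σ:

```python
# Sketch: denoise a square matrix by hard thresholding of singular values
# at the level (4/sqrt(3)) * sqrt(n) * sigma (known-sigma, n-by-n case).
import numpy as np

def svht_denoise(Y, sigma):
    n = Y.shape[0]                              # assumes Y is n-by-n
    tau = (4 / np.sqrt(3)) * np.sqrt(n) * sigma
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[s < tau] = 0.0                            # hard threshold
    return U @ np.diag(s) @ Vt

# Example: rank-2 signal plus Gaussian noise.
rng = np.random.default_rng(0)
n, sigma = 200, 1.0
X0 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
Y = X0 + sigma * rng.standard_normal((n, n))
Xhat = svht_denoise(Y, sigma)
print(np.linalg.norm(Xhat - X0) / np.linalg.norm(X0))  # small relative error
```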
    Neighborliness of Randomly-Projected Simplices in High Dimensions
    Abstract
Let A be a d by n matrix, d < n. Let T = T^(n−1) be the standard regular simplex in R^n. We count the faces of the projected simplex AT in the case where the projection is random, the dimension d is large, and n and d are comparable: d ∼ δn, δ ∈ (0, 1). The projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of R^n. We derive ρ_N(δ) > 0 with the property that, for any ρ < ρ_N(δ), with overwhelming probability for large d, the number of k-dimensional faces of P = AT is exactly the same as for T, for 0 ≤ k ≤ ρd. This implies that P is ⌊ρd⌋-neighborly, and its skeleton Skel_⌊ρd⌋(P) is combinatorially equivalent to Skel_⌊ρd⌋(T). We display graphs of ρ_N. We also study a weaker notion of neighborliness; it asks if the k-faces are all simplicial and if the numbers of k-dimensional faces satisfy f_k(P) ≥ f_k(T)(1 − ε). This was already considered by Vershik and Sporyshev, who obtained qualitative results about the existence of a threshold ρ_VS(δ) > 0 at which a phase transition occurs in k/d. We compute and display ρ_VS and compare it to ρ_N. Our results imply that the convex hull of n Gaussian samples in R^d, with n large and proportional to d, 'looks like a simplex' in the following sense. In a typical realization of such a high-dimensional Gaussian point cloud with d ∼ δn, all points are on the boundary of the convex hull, and all pairwise line segments, triangles, quadrangles, …, ⌊ρd⌋-angles are on the boundary, for ρ < ρ_N(d/n). Our results also quantify a precise phase transition in the ability of linear programming to find the sparsest nonnegative solution to typical systems of underdetermined linear equations; when there is a solution with fewer than ρ_VS(d/n)·d nonzeros, linear programming will find that solution.
Donoho, D., and J. Tanner, "Neighborliness of Randomly-Projected Simplices in High Dimensions", Proceedings of the National Academy of Sciences of the United States of America, 102, 9452-9457.
Authors: Donoho, Tanner. Coders: Donoho, Tanner. Last update: 10/08/2012.
    Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise
    Abstract
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in a sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
Donoho, D., M. Elad, and V. Temlyakov, "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise", IEEE Transactions on Information Theory, 52.
Authors: Donoho, Elad, Temlyakov. Coders: Donoho, Elad, Temlyakov. Last update: 10/08/2012.
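Since the abstract refers to matching pursuit among the practical alternatives, here is a minimal orthogonal matching pursuit sketch (illustrative, not code from the paper; dictionary size and noise level are arbitrary choices):

```python
# Sketch: Orthogonal Matching Pursuit -- greedily pick atoms of an
# incoherent dictionary D to approximate a noisy signal s.
import numpy as np

def omp(D, s, k):
    residual, support = s.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))    # best-correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coef           # re-fit on the support
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

# Example: sparse signal in a random (incoherent) dictionary, plus noise.
rng = np.random.default_rng(0)
N, L, k = 64, 128, 4
D = rng.standard_normal((N, L))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
a0 = np.zeros(L)
a0[rng.choice(L, k, replace=False)] = rng.standard_normal(k)
s = D @ a0 + 0.01 * rng.standard_normal(N)
print(np.nonzero(omp(D, s, k))[0], np.nonzero(a0)[0]) # supports typically match
```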
    Sparse Nonnegative Solution of Underdetermined Linear Equations by Linear Programming
    Abstract
Consider an underdetermined system of linear equations y = Ax with known d × n matrix A and known y. We seek the sparsest nonnegative solution, i.e. the nonnegative x with fewest nonzeros satisfying y = Ax. In general this problem is NP-hard. However, for many matrices A there is a threshold phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming. In classical convex polytope theory, a polytope P is called k-neighborly if every set of k vertices of P spans a face of P. Let a_j denote the j-th column of A, 1 ≤ j ≤ n, let a_0 = 0 and let P denote the convex hull of the a_j. We say P is outwardly k-neighborly if every subset of k vertices not including 0 spans a face of P. We show that outward k-neighborliness is completely equivalent to the statement that, whenever y = Ax has a nonnegative solution with at most k nonzeros, it is the nonnegative solution to y = Ax having minimal sum. Using this and classical results on polytope neighborliness we obtain two types of corollaries. First, because many ⌊d/2⌋-neighborly polytopes are known, there are many systems where the sparsest solution is available by convex optimization rather than combinatorial optimization – provided the answer has fewer nonzeros than half the number of equations. We mention examples involving incompletely-observed Fourier transforms and Laplace transforms. Second, results on classical neighborliness of high-dimensional randomly-projected simplices imply that, if A is a typical uniformly-distributed random orthoprojector with n = 2d and n large, the sparsest nonnegative solution to y = Ax can be found by linear programming provided it has fewer nonzeros than 1/8 the number of equations. We also consider a notion of weak neighborliness, in which the overwhelming majority of k-sets of a_j's not containing 0 span a face. This implies that most nonnegative vectors x with k nonzeros are uniquely determined by y = Ax. As a corollary of recent work counting faces of random simplices, it is known that most polytopes P generated by large n by 2n uniformly-distributed orthoprojectors A are weakly k-neighborly with k ≈ 0.558n. We infer that for most n by 2n underdetermined systems having a sparse solution with fewer nonzeros than roughly half the number of equations, the sparsest solution can be found by linear programming.
Donoho, D., and J. Tanner, "Sparse Nonnegative Solution of Underdetermined Linear Equations by Linear Programming", Proceedings of the National Academy of Sciences of the United States of America, 102, 9446-9451.
Authors: Donoho, Tanner. Coders: Donoho, Tanner. Last update: 10/08/2012.
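The threshold phenomenon described in this abstract is easy to probe numerically; the sketch below (Python with scipy; sizes are illustrative, not the companion code) recovers a sparse nonnegative vector as the minimal-sum nonnegative solution of a linear program:

```python
# Sketch: recover a sparse nonnegative x0 from y = A x0 by solving
# min sum(x) s.t. Ax = y, x >= 0 (a linear program).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, n, k = 50, 100, 10                     # k well below the weak threshold
A = rng.standard_normal((d, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.random(k)   # nonnegative entries
y = A @ x0

res = linprog(np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print(np.allclose(res.x, x0, atol=1e-6))  # typically True in this regime
```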
    On the Stability of the Basis Pursuit in the Presence of Noise
    Abstract
Given a signal S ∈ R^N and a full-rank matrix D ∈ R^(N×L) with N < L, we define the signal's overcomplete representation as α ∈ R^L satisfying S = Dα. Among the infinitely many solutions of this underdetermined linear system of equations, we have special interest in the sparsest representation, i.e., the one minimizing ||α||_0. This problem has a combinatorial flavor to it, and its direct solution is impossible even for moderate L. Approximation algorithms are thus required, and one such appealing technique is the basis pursuit (BP) algorithm. This algorithm has been the focus of recent theoretical research effort. It was found that if indeed the representation is sparse enough, BP finds it accurately. When an error is permitted in the composition of the signal, we no longer require exact equality S = Dα. The BP has been extended to treat this case, leading to a denoising algorithm. The natural question to pose is how the above-mentioned theoretical results generalize to this more practical mode of operation. In this paper we propose such a generalization. The behavior of the basis pursuit in the presence of noise has been the subject of two independent, very wide contributions released for publication very recently. This paper is another contribution in this direction, but as opposed to the others mentioned, it aims to present a somewhat simplified picture of the topic, and thus could be referred to as a primer to this field. Specifically, we establish here the stability of the BP in the presence of noise for sparse enough representations. We study both the case of a general dictionary D, and a special case where D is built as a union of orthonormal bases. This work is a direct generalization of the noiseless BP study, and indeed, when the noise power is reduced to zero, we obtain the known results of the noiseless BP.
Donoho, D., and M. Elad, "On the Stability of the Basis Pursuit in the Presence of Noise", Signal Processing, 86, 511-532.
Authors: Donoho, Elad. Coders: Donoho, Elad. Last update: 10/08/2012.
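The noise-aware variant of BP analyzed in this line of work can be written directly as a convex program; a sketch, assuming the cvxpy package and illustrative parameter values:

```python
# Sketch: basis pursuit denoising  min ||a||_1  s.t.  ||S - D a||_2 <= eps.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, L, k, eps = 64, 128, 4, 0.5
D = rng.standard_normal((N, L))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
a0 = np.zeros(L)
a0[rng.choice(L, k, replace=False)] = rng.standard_normal(k)
S = D @ a0 + 0.05 * rng.standard_normal(N)  # noisy observation

a = cp.Variable(L)
prob = cp.Problem(cp.Minimize(cp.norm(a, 1)),
                  [cp.norm(S - D @ a, 2) <= eps])
prob.solve()
print(np.linalg.norm(a.value - a0))       # small for sparse-enough a0
```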

    Other Companion Sites on same paper

    Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices

    Other Companion Sites relative to similar papers

    A Unified Software Framework for Empirical Gramians
    Abstract
A common approach in model reduction is balanced truncation, which is based on gramian matrices classifying certain attributes of states or parameters of a given dynamic system. Initially restricted to linear systems, the empirical gramians not only extend this concept to nonlinear systems but also provide a uniform computational method. This work introduces a unified software framework supplying routines for six types of empirical gramians. The gramian types are discussed and applied in a model reduction framework for multiple-input-multiple-output (MIMO) systems.
    Himpe, C., "A Unified Software Framework for Empirical Gramians", Institute for Computational and Applied Mathematics at the University of Muenster.
Authors: Himpe, Ohlberger. Coders: Himpe. Last update: 02/05/2013.
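To convey the idea of an empirical gramian, here is a minimal sketch of the controllability gramian of a linear system computed from simulated impulse responses (forward Euler; this is an illustration, not the framework's own API):

```python
# Sketch: empirical controllability gramian  W_C = sum_i int x_i(t) x_i(t)^T dt,
# where x_i is the impulse response to the i-th input, simulated by forward Euler.
import numpy as np

def empirical_controllability_gramian(A, B, dt=0.001, T=20.0):
    n, m = B.shape
    W = np.zeros((n, n))
    for i in range(m):
        x = B[:, i].astype(float)          # impulse response: x(0) = B e_i
        for _ in range(int(T / dt)):
            W += np.outer(x, x) * dt       # accumulate the state covariance
            x = x + dt * (A @ x)
    return W

# For a stable linear system this approximates the usual gramian, i.e. the
# solution of the Lyapunov equation A W + W A^T + B B^T = 0.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
print(empirical_controllability_gramian(A, B))
# Analytic value here: [[1/2, 1/3], [1/3, 1/4]].
```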
    LSD: A Fast Line Segment Detector with a False Detection Control
    Abstract
We propose a linear-time line segment detector that gives accurate results and a controlled number of false detections, and requires no parameter tuning. The algorithm is tested and compared to state-of-the-art algorithms on a wide set of natural images.
    Grompone von Gioi, R., "LSD: A Fast Line Segment Detector with a False Detection Control", IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 722-732.
Authors: Grompone von Gioi, Jakubowicz, Morel, Randall. Coders: Grompone von Gioi. Last update: 10/08/2012.
    Cross-Gramian Based Combined State and Parameter Reduction
    Abstract
An accepted model reduction technique is balanced truncation, by which negligible states of a linear system of ODEs are determined by balancing the system's controllability and observability gramian matrices. To be applicable to nonlinear systems, this method was enhanced through the empirical gramians, while the cross gramian conjoins both gramians into one gramian matrix. This work introduces the empirical cross gramian for square multiple-input-multiple-output systems as well as the (empirical) joint gramian. Based on the cross gramian, the joint gramian determines, in addition to the Hankel singular values, the parameter identifiability, allowing a combined model reduction that concurrently reduces state and parameter spaces. Furthermore, a controllability-based and an observability-based combined reduction method are presented, and the usage of empirical gramians is extended to parameter reduction in (Bayesian) inverse problems. All methods presented are evaluated by numerical experiments.
    Himpe, C., "Cross-Gramian Based Combined State and Parameter Reduction", WWU Muenster.
Authors: Himpe, Ohlberger. Coders: Himpe. Last update: 02/05/2013.
    The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising
    Abstract
Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, ..., y_n of X_0, where y_i = Trace(a_i' X_0) and each a_i is an M by N matrix. For measurement matrices with Gaussian i.i.d. entries, it is known that if X_0 is of low rank, it is recoverable from just a few measurements. A popular approach for matrix recovery is Nuclear Norm Minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Trace(a_i' X) for all 1 <= i <= n, where ||.||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction \delta(n,M,N) = n/(MN), rank fraction \rho = r/N and aspect ratio \beta = M/N. Specifically, a curve \delta^* = \delta^*(\rho;\beta) exists such that, if \delta > \delta^*(\rho;\beta), NNM typically succeeds, while if \delta < \delta^*(\rho;\beta), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, where an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has i.i.d. Gaussian entries. It has been empirically observed that, if X_0 has low rank, it may be recovered quite accurately from the noisy measurement Y. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y - X||_F^2/2 + \lambda ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax MSE, M(\rho) = \lim_{N -> \infty} \inf_\lambda \sup_{rank(X) <= \rho N} MSE(X, \hat{X}_\lambda). We report extensive experiments showing that the phase transition \delta^*(\rho) in the first problem (matrix recovery from Gaussian measurements) coincides with the minimax risk curve M(\rho) in the second problem (matrix denoising in Gaussian noise): \delta^*(\rho) = M(\rho), for any rank fraction 0 < \rho < 1. Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks. Different predictions M(\rho) of the phase transition location were used in the two cases, and were validated by the experimental data.
    Gavish, M., "The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising", Stanford University.
Authors: Donoho, Gavish, Montanari. Coders: Gavish. Last update: 02/15/2013.
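The matrix denoising scheme in this abstract has a closed-form solution by soft thresholding of singular values; a minimal sketch (the tuning constant below is illustrative, not the optimally tuned λ from the paper):

```python
# Sketch: the solution of  min ||Y - X||_F^2 / 2 + lam * ||X||_*  is
# soft thresholding of the singular values of Y at level lam.
import numpy as np

def svt(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt   # shrink toward zero

rng = np.random.default_rng(0)
M, N, r = 100, 100, 3
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))
Y = X0 + rng.standard_normal((M, N))                    # sigma = 1 noise
Xhat = svt(Y, lam=2.0 * np.sqrt(N))                     # illustrative tuning
print(np.linalg.norm(Xhat - X0) / np.linalg.norm(X0))
```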
    Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman
    Abstract
    Denoising is the problem of removing noise from an image. The most commonly studied case is with additive white Gaussian noise (AWGN), where the observed noisy image f is related to the underlying true image u by f = u + η, and η is at each point in space independently and identically distributed as a zero-mean Gaussian random variable. Total variation (TV) regularization is a technique that was originally developed for AWGN image denoising by Rudin, Osher, and Fatemi. The TV regularization technique has since been applied to a multitude of other imaging problems, see for example Chan and Shen's book. We focus here on the split Bregman algorithm of Goldstein and Osher for TV-regularized denoising.
    Getreuer, P., "Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman", Image Processing On Line, 2012.
Authors: Getreuer. Coders: Getreuer. Last update: 10/08/2012.
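Before reaching for split Bregman, the ROF objective can be prototyped with a generic convex solver; a sketch assuming the cvxpy package (this is not the split Bregman implementation the paper describes, and λ is an illustrative choice):

```python
# Sketch: ROF denoising  min_u  ||u - f||_F^2 / 2 + lam * TV(u),
# solved with a generic convex solver rather than split Bregman.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                                 # piecewise-constant image
f = clean + 0.2 * rng.standard_normal(clean.shape)      # AWGN observation

u = cp.Variable(f.shape)
lam = 0.15
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(u - f) + lam * cp.tv(u)))
prob.solve()
print(np.abs(u.value - clean).mean())   # compare with np.abs(f - clean).mean()
```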
    Optimal Stability Polynomials for Numerical Integration of Initial Value Problems
    Abstract
    We consider the problem of finding optimally stable polynomial approximations to the exponential for application to one-step integration of initial value ordinary and partial differential equations. The objective is to find the largest stable step size and corresponding method for a given problem when the spectrum of the initial value problem is known. The problem is expressed in terms of a general least deviation feasibility problem. Its solution is obtained by a new fast, accurate, and robust algorithm based on convex optimization techniques. Global convergence of the algorithm is proven in the case that the order of approximation is one and in the case that the spectrum encloses a starlike region. Examples demonstrate the effectiveness of the proposed algorithm even when these conditions are not satisfied.
    Ketcheson, D., and A. J. Ahmadia, "Optimal Stability Polynomials for Numerical Integration of Initial Value Problems", arXiv.org.
Authors: Ketcheson, Ahmadia. Coders: Ketcheson, Ahmadia. Last update: 12/01/2012.
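The least deviation subproblem mentioned in this abstract can be sketched with a generic convex solver; the formulation below (assuming cvxpy; the monomial basis and sample spectrum are illustrative choices, not the paper's algorithm) checks whether a trial step size is stable, and bisection on h would then give the largest stable step:

```python
# Sketch: for a trial step size h, find real polynomial coefficients a
# (with R(0) = R'(0) = 1 for consistency) minimizing max_k |R(h * lam_k)|;
# h is a stable step when the optimal value is <= 1.
import numpy as np
import cvxpy as cp

def least_deviation(lams, order, h):
    V = np.array([[(h * l) ** j for j in range(order + 1)] for l in lams])
    a = cp.Variable(order + 1)
    prob = cp.Problem(cp.Minimize(cp.max(cp.abs(V @ a))),
                      [a[0] == 1, a[1] == 1])
    prob.solve()
    return prob.value, a.value

# Example: spectrum on the negative real axis (e.g. a diffusion problem).
lams = -np.linspace(0.01, 1.0, 50)
val, coeffs = least_deviation(lams, order=4, h=30.0)
print(val <= 1 + 1e-6)   # degree 4 allows real-axis intervals up to ~2*4^2
```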
    Adaptive Estimation of Vector Autoregressive Models with Time-Varying Variance: Application to Testing Linear Causality in Mean
    Abstract
Linear Vector AutoRegressive (VAR) models where the innovations could be unconditionally heteroscedastic and serially dependent are considered. The volatility structure is deterministic and quite general, including breaks or trending variances as special cases. In this framework we propose Ordinary Least Squares (OLS), Generalized Least Squares (GLS), and Adaptive Least Squares (ALS) procedures. The GLS estimator requires knowledge of the time-varying variance structure, while in the ALS approach the unknown variance is estimated by kernel smoothing of the outer products of the OLS residual vectors. Different bandwidths for the different cells of the time-varying variance matrix are also allowed. We derive the asymptotic distribution of the proposed estimators for the VAR model coefficients and compare their properties. In particular we show that the ALS estimator is asymptotically equivalent to the infeasible GLS estimator. This asymptotic equivalence is obtained uniformly with respect to the bandwidth(s) in a given range and hence justifies data-driven bandwidth rules. Using these results we build Wald tests for linear Granger causality in mean which are adapted to VAR processes driven by errors with a nonstationary volatility. It is also shown that the commonly used standard Wald test for linear Granger causality in mean is potentially unreliable in our framework (incorrect level and lower asymptotic power). Monte Carlo and real-data experiments illustrate the use of the different estimation approaches for the analysis of VAR models with time-varying variance innovations.
    Raïssi, H., "Adaptive Estimation of Vector Autoregressive Models with Time-Varying Variance: Application to Testing Linear Causality in Mean", IRMAR-INSA and CREST ENSAI.
Authors: Patilea. Coders: Raïssi. Last update: 10/08/2012.
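For orientation, the OLS step and the kernel-smoothed variance estimate at the heart of the ALS approach look roughly like the following sketch (numpy; the bandwidth, Gaussian kernel, and VAR(1) example are illustrative choices, not the paper's settings):

```python
# Sketch: OLS estimation of a VAR(1) model  Y_t = A Y_{t-1} + e_t,  plus a
# kernel-smoothed estimate of a time-varying innovation variance matrix.
import numpy as np

def var1_ols(Y):
    X, Z = Y[1:], Y[:-1]                    # regress Y_t on Y_{t-1}
    A_hat = np.linalg.lstsq(Z, X, rcond=None)[0].T
    return A_hat, X - Z @ A_hat.T           # coefficients and residuals

def kernel_variance(resid, bandwidth=0.1):
    T = len(resid)
    t = np.linspace(0, 1, T)
    Sigma = np.empty((T, resid.shape[1], resid.shape[1]))
    for i in range(T):
        w = np.exp(-0.5 * ((t - t[i]) / bandwidth) ** 2)  # Gaussian kernel
        w /= w.sum()
        Sigma[i] = (resid * w[:, None]).T @ resid         # smoothed e_t e_t^T
    return Sigma

rng = np.random.default_rng(0)
T, A = 500, np.array([[0.5, 0.1], [0.0, 0.4]])
scale = 1 + np.linspace(0, 2, T)            # trending volatility
Y = np.zeros((T, 2))
for s in range(1, T):
    Y[s] = A @ Y[s - 1] + scale[s] * rng.standard_normal(2)
A_hat, resid = var1_ols(Y)
Sigma = kernel_variance(resid)
print(np.round(A_hat, 2))
print(np.trace(Sigma[0]), np.trace(Sigma[-1]))  # variance grows along the sample
```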
    Sparks and Deterministic Constructions of Binary Measurement Matrices from Finite Geometry
    Abstract
This code and data generate the figures in the paper "Sparks and Deterministic Constructions of Binary Measurement Matrices from Finite Geometry" by Shu-Tao Xia, Xin-Ji Liu, Yong Jiang, and Hai-Tao Zheng.
    Liu, X., "Sparks and Deterministic Constructions of Binary Measurement Matrices from Finite Geometry", Ideas.
Authors: Liu. Coders: Liu. Last update: 09/21/2013.
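For context, the spark of a measurement matrix (the smallest number of linearly dependent columns) can be computed by brute force on small instances; a sketch (exponential time, for illustration only, not the paper's constructions):

```python
# Sketch: brute-force spark -- the size of the smallest set of linearly
# dependent columns of A. Exponential in N; for small examples only.
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    n, N = A.shape
    for k in range(1, N + 1):
        for cols in combinations(range(N), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return N + 1   # unreachable when N > n, since any n+1 columns are dependent

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
print(spark(A))    # generically n + 1 = 5 for a random 4-by-6 matrix
```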
    Frequently Asked Questions


There are no questions about this code yet.