This question concerns a concept from classical Fourier analysis; I want to show how it bears some similarities to compressed sensing using minimum total variation.
In this problem, for a BV function $f$, we give an alternate formula for Fourier series reconstruction from the first $n$ Fourier coefficients, denoted here $\tilde{S}_n f$, which has an advantage over the traditional Fourier partial sum $S_n f$: $\tilde{S}_n f$ converges to $f$ under the $TV$ metric, while $S_n f$ does not, since $TV(S_n f)$ shoots to $\infty$ when $f$ has at least one jump.
Here I give a possible analogy with compressive sensing using minimum total variation.
Compressive sensing using minimum total variation
Let $x \in \mathbb{R}^N$ be the vector to be measured; we assume the gradient of $x$ is sparse.
Given linear measurements $y = Ax$, we take as the reconstruction the solution $\hat{x}$ that minimizes $TV(\hat{x})$ subject to $A\hat{x} = y$, where $TV(x) = \sum_i |x_{i+1} - x_i|$ is the total variation.
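As a concrete sketch of this formulation (the signal, the Gaussian measurement matrix, the problem sizes, and the linear-programming reformulation below are my own illustrative choices, not part of the question), the TV-minimization can be recast as an LP by introducing slack variables $t_i \ge |x_{i+1} - x_i|$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m = 50, 20

# Piecewise-constant ground truth: its discrete gradient is 2-sparse.
x_true = np.zeros(N)
x_true[10:30] = 1.0
x_true[30:] = -0.5

A = rng.standard_normal((m, N))  # random Gaussian measurements
y = A @ x_true

# LP reformulation of  min TV(x)  s.t.  A x = y,  TV(x) = sum_i |x_{i+1} - x_i|.
# Variables z = [x (N), t (N-1)]; minimize sum(t) with -t <= D x <= t.
D = np.diff(np.eye(N), axis=0)               # (N-1) x N difference matrix
c = np.concatenate([np.zeros(N), np.ones(N - 1)])
A_ub = np.block([[D, -np.eye(N - 1)],        # D x - t <= 0
                 [-D, -np.eye(N - 1)]])      # -D x - t <= 0
b_ub = np.zeros(2 * (N - 1))
A_eq = np.hstack([A, np.zeros((m, N - 1))])  # enforce A x = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * N - 1))
x_hat = res.x[:N]
print(res.status, np.abs(x_hat - x_true).max())
```

Since $x_{\text{true}}$ itself is feasible, the minimizer's TV can never exceed $TV(x_{\text{true}})$; with enough random measurements one typically sees exact recovery of the piecewise-constant signal.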
Although our problem is not quite the same as compressive sensing, it has two striking similarities with it.
1. Here our signal is a continuous-time signal, so if we want to say it has a sparse gradient, the best we can say is that $f'$ has minimal support, which at its sparsest means $f$ is a step function!
2. Here, instead of basis pursuit, we address reconstruction from the first $n$ Fourier coefficients while trying to keep the TV small, as in the CS problem.
If we use $S_n f$ as the solution, then to decrease the reconstruction error we can increase $n$, but the problem is that $TV(S_n f)$ shoots to $\infty$, so the solution fails to have small TV.
By using the alternate reconstruction $\tilde{S}_n f$ as the solution, increasing $n$ kills two birds with one stone: the reconstruction error decreases, and $TV(\tilde{S}_n f)$ does not blow up; moreover, $\tilde{S}_n f \to f$ in the $TV$ metric.
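This contrast can be checked numerically. The question does not specify the alternate formula $\tilde{S}_n f$, so as a stand-in I use the classical Fejér (Cesàro) mean, which keeps the total variation bounded by $TV(f)$ because it convolves $f$ with a nonnegative kernel of unit mass (it is not claimed to be the question's formula):

```python
import numpy as np

def partial_sum(x, n):
    # Fourier partial sum S_n of the square wave sign(sin x):
    # f ~ (4/pi) * sum over odd k of sin(kx)/k
    s = np.zeros_like(x)
    for k in range(1, n + 1, 2):
        s += (4 / np.pi) * np.sin(k * x) / k
    return s

def fejer_mean(x, n):
    # Fejér (Cesaro) mean: same series with triangular weights (1 - k/(n+1))
    s = np.zeros_like(x)
    for k in range(1, n + 1, 2):
        s += (1 - k / (n + 1)) * (4 / np.pi) * np.sin(k * x) / k
    return s

def tv(u):
    # discrete total variation on a periodic grid
    return np.sum(np.abs(np.diff(u))) + abs(u[0] - u[-1])

x = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
f = np.sign(np.sin(x))   # TV(f) = 4: two jumps of height 2

tv_s25 = tv(partial_sum(x, 25))
tv_s201 = tv(partial_sum(x, 201))
tv_f201 = tv(fejer_mean(x, 201))
print(tv_s25, tv_s201, tv_f201)
```

For the square wave, the discrete TV of the partial sums keeps growing with $n$ (the Gibbs oscillations), while the Fejér mean's TV stays below $TV(f) = 4$ no matter how large $n$ gets.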
Isn't this a mathematically more beautiful and promising concept/theory than CS (compressive sensing), for the following reasons?
1. It deals with continuous-time signals and avoids the gradient-sparsity constraint (which is awkward, especially in assuming that many coefficients are exactly zero rather than merely small).
2. We do not perform any awkward optimization; instead we give a deterministic formula for reconstruction.
My question is whether we can develop a theory better than compressive sensing from this concept. Comments are appreciated.