A Concept in Classical Fourier Analysis and Its Similarity to Compressed Sensing Using Minimum Total Variation

In this post we take the concept presented in this question on classical Fourier analysis and show how it bears some similarity to compressed sensing using minimum total variation.

In this problem, for a BV function f(t), we give an alternate formula for Fourier series reconstruction from the first N Fourier coefficients, denoted P^f_N. It has an advantage over the traditional Fourier partial sum S^f_N in the following sense: P^f_N converges to f under the metric d(x,y) = ||x-y||_{L^1} + |TV(x) - TV(y)|, while S^f_N does not, since TV(S^f_N) shoots to \infty whenever f has at least one jump.
Here I give a possible analogy with compressive sensing using minimum total variation.

Compressive Sensing Using Minimum Total Variation

Let x_0 be the vector to be measured; we assume the gradient of x_0 is sparse, and we observe the linear measurements y = A x_0.
We take as the best solution the x_* for which \lambda TV(x) + ||Ax - y|| is minimized, where TV is the (discrete) total variation.
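As a concrete illustration, here is a minimal numerical sketch of this recovery problem in Python, assuming the numpy and cvxpy packages; the signal, the Gaussian measurement matrix, and the value of \lambda are illustrative choices, not part of the original discussion.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m = 200, 60

    # Piecewise-constant signal: its gradient (first difference) is sparse.
    x0 = np.zeros(n)
    x0[50:120] = 1.0
    x0[120:] = -0.5

    # Random Gaussian measurements y = A x0.
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x0

    # Minimize lambda * TV(x) + ||A x - y||_2 over x.
    lam = 0.1
    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(lam * cp.tv(x) + cp.norm(A @ x - y, 2)))
    problem.solve()

    print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))

Even with far fewer measurements than unknowns (m = 60 vs. n = 200), the TV term should steer the solver toward a piecewise-constant reconstruction.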

Our Problem

Although our problem is not quite the same as compressive sensing, it shares two striking similarities with it.

1. Here our signal is a continuous-time signal, so if we want to say that our signal has a sparse gradient, the best analogue is to say that f'(t) has minimal support, which amounts to saying f(t) is a step function!

2. Here, instead of basis pursuit, we address the problem of reconstruction from the first N Fourier coefficients while trying to keep TV minimal, as in the CS problem.

If we use S^f_N as the solution, we can decrease ||f - S^f_N||_{L^2} by increasing N, but the problem is that TV(S^f_N) shoots to \infty, so the solution is far from having minimum TV (see the sketch below).
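To see this blow-up numerically, here is a short sketch, assuming the square wave f(t) = sgn(sin t) as the jump example and approximating TV(S^f_N) by the discrete variation on a fine grid; the grid size and the values of N are illustrative.

    import numpy as np

    def partial_sum_square_wave(N, t):
        # Fourier partial sum of the square wave sgn(sin t):
        # S^f_N(t) = (4/pi) * sum over odd k <= N of sin(k t)/k.
        s = np.zeros_like(t)
        for k in range(1, N + 1, 2):
            s += np.sin(k * t) / k
        return 4.0 / np.pi * s

    def discrete_tv(u):
        # Discrete total variation: sum of |u[i+1] - u[i]|.
        return np.abs(np.diff(u)).sum()

    t = np.linspace(0.0, 2.0 * np.pi, 20001)
    for N in (9, 33, 129, 513):
        print(N, discrete_tv(partial_sum_square_wave(N, t)))

The printed variation keeps growing with N (roughly logarithmically, driven by the Gibbs oscillations), while the square wave itself has TV(f) = 4 over one period.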

By using P^f_N as the solution, increasing N kills two birds with one stone: ||f - P^f_N||_{L^2} decreases, and TV(P^f_N) does not blow up; moreover, TV(P^f_N) \to TV(f).
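The formula for P^f_N itself comes from the linked question and is not reproduced here. As a hypothetical stand-in with the same qualitative behavior, the sketch below uses the Fejér (Cesàro) mean \sigma_N f: since the Fejér kernel is a probability density, TV(\sigma_N f) \le TV(f), and since \sigma_N f \to f in L^1 with TV lower semicontinuous, in fact TV(\sigma_N f) \to TV(f).

    import numpy as np

    def fejer_mean_square_wave(N, t):
        # Fejer (Cesaro) mean of the square wave sgn(sin t): each Fourier
        # coefficient of the partial sum is damped by (1 - k/(N + 1)).
        s = np.zeros_like(t)
        for k in range(1, N + 1, 2):
            s += (1.0 - k / (N + 1.0)) * np.sin(k * t) / k
        return 4.0 / np.pi * s

    def discrete_tv(u):
        # Discrete total variation: sum of |u[i+1] - u[i]|.
        return np.abs(np.diff(u)).sum()

    t = np.linspace(0.0, 2.0 * np.pi, 20001)
    for N in (9, 33, 129, 513):
        print(N, discrete_tv(fejer_mean_square_wave(N, t)))

In contrast with the partial sums above, these values stay below 4 = TV(f) and approach it as N grows.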
Isn’t this a mathematically more beautiful and promising concept/theory than CS (compressive sensing), for the following reasons?

1. It deals with continuous-time signals and avoids the gradient-sparsity constraint (which is awkward, especially in assuming many coefficients to be exactly zero rather than merely small).

2. We do not do any awkward optimization, but instead give a deterministic formula for the reconstruction.

My question is whether we can develop a better theory than compressive sensing using this concept. Comments are appreciated.
