# Step functions on a plane

In this post, I give a possible definition for step functions in ${\mathbb{R}^2}$, together with a related problem and a possible solution. 1. Problem Let ${\alpha(s) = (x(s),y(s))…

Source: Step functions on a plane

# A Jumping Fourier

Let ${f:\mathbb{R}\rightarrow\mathbb{R}}$ with ${f(t) = 0}$ for ${t<0}$. Let ${f \in L^2(\mathbb{R})}$ be locally BV, and let its derivatives…

Source: A Jumping Fourier


# A Concept in Classical Fourier Analysis and its similarity to Compressed Sensing using minimum total variation.

The concept presented in this question comes from classical Fourier analysis; we show how it has some similarities to compressed sensing using minimum total variation.

In this problem, for a BV function $f(t)$, we give an alternate Fourier-series reconstruction from the first $N$ Fourier coefficients, denoted $P^f_N$. It has an advantage over the traditional Fourier partial sum $S^f_N$: $P^f_N$ converges to $f$ under the metric $d(x,y) = \|x-y\|_{L^1} + |TV(x)-TV(y)|$, while $S^f_N$ does not, since $TV(S^f_N)$ blows up to $\infty$ whenever $f$ has at least one jump.
Here I give a possible analogy with compressive sensing using minimum total variation.

## Compressive Sensing using minimum Total variation

Let $x_o$ be the vector to be measured; we assume the gradient of $x_o$ is sparse.
We take as our solution the $x_i$ for which $\lambda\, TV(x) + \|Ax-y\|$ is minimized at $x = x_i$, where $TV$ is the total variation, $A$ is the measurement matrix, and $y$ is the measurement vector.
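As a concrete toy illustration of this minimization, here is a sketch with numpy/scipy. The smoothed TV (to keep the objective differentiable), the signal `x_o`, the matrix `A`, and the value of `lam` are all illustrative assumptions, not part of the original formulation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy setup: x_o is piecewise constant (so its gradient is sparse),
# and y = A @ x_o are compressive measurements (fewer rows than unknowns).
n, m = 40, 25
x_o = np.concatenate([np.zeros(15), np.ones(15), 2 * np.ones(10)])
A = rng.standard_normal((m, n))
y = A @ x_o

lam = 0.5
eps = 1e-6  # smoothing so the TV term is differentiable for the solver

def objective(x):
    tv = np.sum(np.sqrt(np.diff(x) ** 2 + eps))  # smoothed total variation
    return lam * tv + np.linalg.norm(A @ x - y)

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
x_i = res.x  # the minimizer; it should approximate the piecewise-constant x_o
```

The point of the sketch is only the shape of the objective, $\lambda\,TV(x) + \|Ax-y\|$; in practice dedicated TV solvers are used rather than a generic quasi-Newton method.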

## Our Problem

Although our problem is not quite the same as compressive sensing, it has two striking similarities with it.

1. Here our signal is a continuous-time signal, so if we want to say that it has a sparse gradient, the best we can say is that $f'(t)$ has minimal support, which at the extreme means $f(t)$ is a step function!

2. Here, instead of basis pursuit, we address reconstruction from the first $N$ Fourier coefficients while trying to keep the TV minimal, as in the CS problem.
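The discrete analogue of point 1 is easy to see: a step signal's first difference is nonzero only at its jump locations, i.e. its discrete gradient is sparse. A minimal numpy sketch (the signal `x` is illustrative):

```python
import numpy as np

# A step (piecewise-constant) signal with two jumps.
x = np.concatenate([np.zeros(10), 3 * np.ones(10), np.ones(10)])

# Its discrete gradient (first difference) is supported only on the jumps.
grad = np.diff(x)
print(np.count_nonzero(grad))  # 2 jumps -> only 2 nonzero gradient entries
```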

If we use $S^f_N$ as the solution, we can increase $N$ to decrease $\|f-S^f_N\|_{L^2}$, but $TV(S^f_N)$ blows up to $\infty$, so the solution does not have minimal TV.
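The blow-up of $TV(S^f_N)$ is easy to see numerically: take $f$ to be a square wave, and the Gibbs oscillations of the partial sum contribute a total variation that keeps growing with $N$. A minimal sketch, with a discrete TV on a fine grid standing in for the true TV:

```python
import numpy as np

def partial_sum(t, N):
    # Fourier partial sum of the square wave f = sign(sin t):
    # S_N(t) = (4/pi) * sum over odd k <= N of sin(k t)/k.
    return (4 / np.pi) * sum(np.sin(k * t) / k for k in range(1, N + 1, 2))

def discrete_tv(v):
    # Discrete total variation: sum of absolute successive differences.
    return float(np.abs(np.diff(v)).sum())

t = np.linspace(0, 2 * np.pi, 40001)
tvs = [discrete_tv(partial_sum(t, N)) for N in (9, 99, 999)]
print(tvs)  # strictly increasing: TV(S_N) keeps growing with N
```

The square wave itself has finite TV over the period, but the computed values grow without bound as $N$ increases, which is exactly the failure under the metric $d$ above.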

By using $P^f_N$ as the solution, increasing $N$ kills two birds with one stone: $\|f-P^f_N\|_{L^2}$ decreases, and $TV(P^f_N)$ does not blow up; moreover, $TV(P^f_N) \to TV(f)$.
Isn't this a mathematically more beautiful and promising concept/theory than CS (compressive sensing), for the following reasons?

1. It deals with continuous-time signals and avoids the gradient-sparsity constraint (which is awkward, especially in assuming many coefficients are exactly zero rather than merely small).

2. We do not perform any awkward optimization; instead, we give a deterministic reconstruction formula.

My question is whether we can develop a better theory than compressive sensing using this concept. Comments are appreciated.

# Understanding signals means understanding the world!

Coming Soon!

I’ll try to gather some words together on why signal processing is ubiquitous in science and technology: some from reading, some from the internet, and some of my own.

I’ll try to write a brief intro to signal processing in the future; until then, I’ll keep posting updates about some of the things I think about.

1. An interesting metric space, if proven right!