For many computational problems we know some polynomial-time algorithm,
say one running in quadratic time, but it is open whether faster algorithms exist.
In a big data world, it is essential to close this gap: if the true
complexity of the problem is indeed quadratic, then it is intractable on
data arising in areas such as DNA sequencing or social networks, where
essentially only near-linear time algorithms are feasible.
Unfortunately, classic hardness assumptions such as P!=NP are too coarse
to explain the gap between linear and quadratic time.
Fine-grained complexity comes to the rescue by providing conditional
lower bounds via fine-grained reductions from certain hard core
problems. For instance, it allowed us to rule out truly subquadratic
algorithms for the Longest Common Subsequence problem (used, e.g., by the
diff file comparison tool), assuming a certain strengthening of P!=NP.
In this talk we will further discuss applications to Edit Distance,
Subset Sum, and RNA Folding.
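To make the quadratic barrier concrete, here is a minimal sketch (in Python, with hypothetical function and variable names) of the textbook dynamic program for Longest Common Subsequence, which runs in time O(nm) on strings of lengths n and m; it is precisely this quadratic running time that the conditional lower bounds show cannot be improved to truly subquadratic:

```python
def lcs_length(a: str, b: str) -> int:
    # Classic O(n*m) dynamic program, using a single rolling row:
    # dp[j] holds the LCS length of the current prefix of a and b[:j].
    n, m = len(a), len(b)
    dp = [0] * (m + 1)
    for i in range(1, n + 1):
        prev_diag = 0  # value of dp[i-1][j-1] in the full table
        for j in range(1, m + 1):
            old = dp[j]
            if a[i - 1] == b[j - 1]:
                dp[j] = prev_diag + 1  # extend a common subsequence
            else:
                dp[j] = max(dp[j], dp[j - 1])  # drop a char from a or b
            prev_diag = old
    return dp[m]
```

The two nested loops over the input strings are the source of the quadratic cost; fine-grained reductions show that, under the stated assumption, no algorithm can avoid this up to subpolynomial factors.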