This article details an interesting derivation of the closed form of the \(n^\text{th}\) Fibonacci number.
I was reviewing some old exercises from my undergrad and found a very interesting derivation that I thought was worth sharing. From a mathematical perspective it amounts to relatively basic linear algebra, but from a programming perspective it demonstrates a non-iterative, non-recursive method of determining the \(n^\text{th}\) Fibonacci number.
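For context, the closed form in question is usually stated as Binet's formula, \(F_n = (\varphi^n - \psi^n)/\sqrt{5}\) with \(\varphi = (1+\sqrt{5})/2\) and \(\psi = (1-\sqrt{5})/2\). A minimal sketch of evaluating it directly, without loops or recursion, might look like this (the function name and the rounding trick are my own illustrative choices, not part of the derivation itself):

```python
import math

def fib_closed_form(n: int) -> int:
    """nth Fibonacci number via Binet's formula, with F(0) = 0, F(1) = 1."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2  # golden ratio
    psi = (1 - sqrt5) / 2  # conjugate root
    # The psi term shrinks to zero, so rounding recovers the exact integer;
    # floating-point round-off eventually breaks this for large n (~70+).
    return round((phi**n - psi**n) / sqrt5)

print([fib_closed_form(n) for n in range(10)])
# → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Note that this is constant-time only in the idealized sense; in floating point the answer is exact only up to moderate \(n\), after which an exact-arithmetic variant is needed.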
The following is a basic statistical primer which can be used to help guide the reading of the rest of this blog. It touches upon basic statistical concepts such as bias, variance, probability spaces, random variables, etc.
In many of the derivations and discussions on this blog we will use notation and terminology common to probability theory and statistical learning theory. As such, I think it is fitting that I provide a basic statistical primer outlining that information. The depth of discussion here will vary, as explanations will be given in further detail in separate posts when practical.
This is a follow up to the previous post detailing the basic mathematics behind persistent homology. A few basic applications will be demonstrated.
Now that we’ve dissected some of the basic algebraic facets of persistent homology in the previous post, we can apply these tools to some real data using a persistent homology package, Dionysus. In particular, the following data analysis was performed using the Anaconda suite for Python 3. In our first example, we sample points from an annulus which we’ve randomly generated by normalizing random NumPy arrays. The annulus used looks like so:
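The generation step can be sketched as follows: draw random 2-D Gaussian vectors, normalize them onto the unit circle, then scale each by a random radius. The radii, point count, and seed below are illustrative choices, not the exact parameters used for the figure above:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def sample_annulus(n_points: int, r_inner: float = 1.0, r_outer: float = 2.0):
    """Sample points from an annulus by normalizing random Gaussian
    vectors to the unit circle, then scaling by random radii."""
    v = rng.normal(size=(n_points, 2))
    v /= np.linalg.norm(v, axis=1, keepdims=True)       # directions on the unit circle
    radii = rng.uniform(r_inner, r_outer, size=n_points)
    return v * radii[:, None]                           # scale out to the annulus

points = sample_annulus(200)
```

From here the point cloud can be handed to Dionysus to build a filtration (e.g. a Vietoris–Rips complex) and compute its persistence, as shown later in the post.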
A basic introduction to the mathematics of persistent homology. This will be followed up with a post detailing applications to data science and showing code used to compute the persistent homology of some selected structures.
When analyzing a data set, one is often interested in finding some sort of global structure from representations which may not reveal such structure without much coaxing or insight. The raw data one usually encounters tends to be given as a set of observations in \(\mathbb{R}^n\), where \(n\) denotes some number of predictors. Of course, when \(n \geq 4\) our ability to visualize and reason about the geometric features of such data sets all but vanishes. It is clear, then, that we require specialized tools and techniques to assist us in this regard. One such technique, arising at the intersection of topology and data analysis, is persistent homology.