Within the field of machine learning, Random Projection (RP) is one of the simplest methods available for dimensionality reduction. It relies on a classical result, the Johnson-Lindenstrauss lemma, which states that a small set of points in a high-dimensional feature space can be mapped into a space of much lower dimension in such a way that pairwise distances between the points are nearly preserved. It was later proved that this mapping can be performed with a projection matrix whose elements are drawn from an extremely simple distribution [1], reducing the projection computation to simple aggregate operations over the input coordinates. However, because of the randomness introduced during the construction of the projection matrix, the algorithm behaves non-deterministically: different runs on the same data can yield embeddings of different quality. To illustrate this, Figure 1 shows the distribution of the stress measure over 200 runs of Random Projection on 500 samples from two typical machine learning datasets.
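The procedure described above can be sketched in a few lines of NumPy. The following is an illustrative example, not the exact setup of the experiments in Figure 1: it uses synthetic Gaussian data and a sparse {+1, 0, -1} projection matrix in the spirit of [1] (entries +1 with probability 1/6, 0 with probability 2/3, -1 with probability 1/6, scaled by sqrt(3/k)), and then compares a sample of pairwise distances before and after projection. The dimensions n, d, and k are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 500, 1000, 50          # samples, original dim, target dim (illustrative)
X = rng.normal(size=(n, d))      # synthetic high-dimensional data

# Sparse projection matrix with entries drawn from a simple
# three-valued distribution: +1 w.p. 1/6, 0 w.p. 2/3, -1 w.p. 1/6,
# scaled by sqrt(3/k) so squared distances are preserved in expectation.
R = rng.choice([1.0, 0.0, -1.0], size=(d, k), p=[1 / 6, 2 / 3, 1 / 6])
R *= np.sqrt(3.0 / k)

Y = X @ R                        # projected data, shape (n, k)

# Compare a random sample of pairwise distances before and after projection.
i = rng.integers(0, n, size=200)
j = rng.integers(0, n, size=200)
mask = i != j                    # skip degenerate zero-distance pairs
orig = np.linalg.norm(X[i[mask]] - X[j[mask]], axis=1)
proj = np.linalg.norm(Y[i[mask]] - Y[j[mask]], axis=1)
print("mean distance ratio:", np.mean(proj / orig))
```

Because R is random, rerunning this with a different seed gives a different projection and slightly different distance distortions, which is exactly the run-to-run variability that Figure 1 quantifies via the stress measure.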