Indexing the unknown
The story began when we started exploring the un-indexed parts of the internet: the dark web and the deep web. No general index existed of what was online or where to find it, so we began compiling a list of known sites. We quickly found a way to automate this process and to monitor when new sites, forums and marketplaces appear.
Archiving what’s necessary
Information published on the dark web and deep web is short-lived by nature; what you seek might be gone in five minutes. With this in mind, we began archiving information as soon as we saw it. This is why we can provide results dating back five years.
Not all information is equal. Our quest remains to ensure access to the forums, black markets and leak platforms where the most important discussions take place. At the same time, we distil that information so that you can understand its magnitude and act upon it.