Tuesday May 20 – 2003 :: I Want a New Girlfriend

Alright, this is my first post on here, so you’ll have to bear with me until I get used to it.

You probably have two questions. Who the hell am I and why am I posting on IWANGF? Well, my name is cman. Some of you may know me, but chances are most of you don’t. I am the creator/owner of the cman network. My goal is to be involved in as many sites as possible so as to make myself look more important haha.

The reason I am posting on here is that Jack is a very busy man and doesn’t always have time to post. So, he asked me to help out a bit.

I’m going to try to include some porn for you in all of my posts, don’t worry, but forgive me if you’ve seen it before. It’s hard to find stuff that hasn’t been seen a million times.

Speaking of porn, I run a porn site of my own. It’s actually a celeb site, featuring high quality pictures and videos of your favorite celebs. Check it out, www.exp0sed.com GO NOW!!!

You’ll also notice from time to time that I tend to rant a bit, stop me if I’m getting annoying! The most recent issue to piss me off is something I read today. I don’t know if you’ve heard or not, but apparently there is a plan to resurrect Napster. WHY?!?! Anyway, you can read the article here or you can read my rant on the subject here


Web-wide crawl with initial seed list and crawler configuration from March 2011. This uses the new HQ software for distributed crawling by Kenji Nagahashi.

What’s in the data set:

Crawl start date: 09 March, 2011
Crawl end date: 23 December, 2011
Number of captures: 2,713,676,341
Number of unique URLs: 2,273,840,159
Number of hosts: 29,032,069

The seed list for this crawl was a list of Alexa’s top 1 million web sites, retrieved close to the crawl start date. We used Heritrix (3.1.1-SNAPSHOT) crawler software and respected robots.txt directives. The scope of the crawl was not limited except for a few manually excluded sites.
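Respecting robots.txt, as described above, means checking each site's exclusion rules before fetching a URL. As a minimal sketch (this is not Heritrix's implementation), Python's standard-library `urllib.robotparser` can evaluate a robots.txt policy; the user-agent token and the rules below are illustrative assumptions, not taken from the actual crawl configuration:

```python
from urllib import robotparser

# Hypothetical robots.txt content, for illustration only.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite crawler consults the parsed rules before each fetch.
print(rp.can_fetch("example-bot", "http://example.com/index.html"))   # True
print(rp.can_fetch("example-bot", "http://example.com/private/x"))    # False
```

In a real crawl the robots.txt file would be fetched per host (and cached), rather than supplied inline as here.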

However, this was a somewhat experimental crawl for us, as we were using newly minted software to feed URLs to the crawlers, and we know there were some operational issues with it. For example, in many cases we may not have crawled all of the embedded and linked objects in a page, since the URLs for these resources were added into queues that quickly grew bigger than the intended size of the crawl (and therefore we never got to them). We also included repeated crawls of some Argentinian government sites, so looking at results by country will be somewhat skewed.
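The queue problem above can be sketched with a toy model (not the actual HQ or Heritrix logic, and the URLs and budget are made up): a crawl with a fixed fetch budget dequeues URLs from a frontier, and each fetched page enqueues its embedded resources at the back. Once the budget is exhausted, whatever is still queued is simply never fetched:

```python
from collections import deque

# Toy crawl frontier: a fixed fetch budget, with discovered resources
# appended to the back of the queue. URLs still queued when the budget
# runs out are never crawled.
budget = 3
frontier = deque(["http://seed1/", "http://seed2/"])
fetched = []

while frontier and len(fetched) < budget:
    url = frontier.popleft()
    fetched.append(url)
    # Each page "discovers" embedded objects (images, stylesheets).
    frontier.extend([url + "img.png", url + "style.css"])

print(fetched)        # the seeds, plus only the first embedded object
print(len(frontier))  # the remaining discovered URLs, left unfetched
```

Because discovery outpaces fetching, the frontier grows faster than it drains, which is essentially why many embedded objects in this crawl were queued but never captured.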

We have made many changes to how we do these wide crawls since this particular example, but we wanted to make the data available “warts and all” for people to experiment with. We have also done some further analysis of the content.

If you would like access to this set of crawl data, please contact us at info at archive dot org and let us know who you are and what you’re hoping to do with it. We may not be able to say “yes” to all requests, since we’re just figuring out whether this is a good idea, but everyone will be considered.