It’s a start. 100% tax above $1bn is my preference.
Developer of PieFed, a sibling of Lemmy & Mbin.
Thing is, search bars are for typing in keywords, not URLs.
Certain other federated reddit clones just have an ‘Add remote community’ button on the communities list.
You’re asking a lot from a LinkedIn post
It’s been a consensus for decades
Let’s see about that.
Wikipedia lists http://www.robotstxt.org/ as the official homepage of robots.txt and the “Robots Exclusion Protocol”. The first entry in the FAQ at http://www.robotstxt.org/faq.html is “What is a WWW robot?” (http://www.robotstxt.org/faq/what.html). It says:
A robot is a program that automatically traverses the Web’s hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.
That’s not FediDB. That’s not even nodeinfo.
Maybe the definition of the term “crawler” has changed, but crawling used to mean downloading a web page, parsing out its links, then downloading all of those linked pages, parsing them, and so on until the whole site has been downloaded. If links to other sites turned up in that corpus, the same process repeated for those sites too. Obviously this could cause heavy load, hence robots.txt.
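To make that concrete, here’s a minimal sketch of the classic crawl loop the definition describes: download a page, parse out its links, queue them, repeat until the site is exhausted. The `fetch` callable is injected (it would be an HTTP GET in a real crawler); all names here are illustrative, not anyone’s actual code.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href values from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(fetch, start_url):
    """Breadth-first traversal: fetch a page, then fetch everything it links to."""
    seen, queue = {start_url}, [start_url]
    while queue:
        url = queue.pop(0)
        parser = LinkExtractor()
        parser.feed(fetch(url))           # one download per page
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute not in seen:      # avoid loops and re-downloads
                seen.add(absolute)
                queue.append(absolute)
    return seen
```

Every page reached generates another download, which is exactly the load pattern robots.txt was invented to throttle.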
FediDB isn’t doing anything like that, so I’m a bit bemused by this whole thing.
lol FediDB isn’t a crawler, though. It makes API calls.
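For contrast with the recursive crawl above, here’s a sketch of the kind of call a stats service makes: the NodeInfo protocol is a two-step discovery, one GET to `/.well-known/nodeinfo` for a list of links, then one GET to the linked document for software name and user counts. The helper below just picks the right endpoint out of the discovery document; it isn’t FediDB’s actual code, and `example.social` is a made-up instance.

```python
import json

def pick_nodeinfo_href(well_known_json, schema_version="2.0"):
    """Pick the endpoint for a given NodeInfo schema version from the
    /.well-known/nodeinfo discovery document."""
    doc = json.loads(well_known_json)
    for link in doc.get("links", []):
        # rel is a schema URI ending in the version, e.g. .../ns/schema/2.0
        if link.get("rel", "").endswith("/" + schema_version):
            return link["href"]
    return None
```

Two fixed requests per instance, no link-following — a very different beast from a crawler.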
Anytime there is an open-source “community edition” and a closed-source “enterprise edition”, it’s pretty suspect. There will always be a temptation to make the community edition a bit crippled, to drive sales of the paid version.