YouTube's setup, where anyone can claim any video and the first line of defense creators have is to plead innocent to the same party making the claim, is the most user-hostile (creator-hostile?) thing I've ever seen.
I've always found it funny that piracy still totally exists on youtube, but the pirates just add obfuscation techniques to prevent immediate takedowns.
Then you watch the "Important Videos" playlist (https://www.youtube.com/playlist?list=PL7XlqX4npddfrdpMCxBnN...) and half the videos are taken down because the account had too many strikes. It's really sad.
Funny in a terrible sort of way. The legacy copyright industry follows the same logic as various drug war participants - winning means a loss of leverage to ask for more power.
Same discrepancy between reality and rhetoric. Stated goal: eliminate all of a thing for moral reasons. Actual goal: eliminate the easiest majority of a thing for economic reasons.
I've been tuning into episodes of South Park, Family Guy, American Dad, or The Simpsons streamed LIVE recently. They look like straight torrent uploads, so I'm curious how they're getting around the filters... They have ads, so someone must be making money for this to be worth pursuing.
It would be interesting to know how often the Streisand Effect is invoked intentionally as part of a "viral marketing campaign" or similar. At this point it's well-understood enough that I'm sure there are people who are depending on the suppressed outlet to go public and "Streisand" their property for them.
I would've never heard of the single domain on this blocklist if not for this project. We should be careful that we're not accidentally handing a new marketing tool over by "Streisanding".
As the author notes, the Streisand effect at work. But more importantly, I am quite happy that someone actually decided to stand their ground and call the bluff of future malware distributors (sorry, advertising companies). I've seen the chilling effects the DMCA has had on reasonable discourse in the YouTube community, but extending it to what people can block in their browsers is absolute insanity.
I believe that's how it was originally maintained: outside GitHub, just an HTTP link to a text file that gets imported (it works the same way, just hosted on GitHub now).
Just make the ad-blocking extension retrieve an extra file outside of GitHub automatically (default on, but configurable like the other filter lists in the options), or I'm sure there are already custom lists on the net, based on EasyList or others, that include/can include these domains.
But props to the creator of this, great ingenuity.
The author mentioned in the readme that having it in the Chrome Store makes it easy to quantify the damage it's doing to these sites.
Couldn't you achieve the same thing by tracking http requests to a regular list?
I'm not sure why I would use this extension because I already have the excellent ublock which I can add any list I want to. I think more people would benefit if this was published as a list instead of a totally new extension.
Agreed. If it were a list I'd gladly tick it over. But I am hesitant to add a new extension.
I understand the hesitation. A user has submitted a text-based version:
Funny how many of those domains are just two dictionary words juxtaposed.
I'm not sure how that would be done but I'm open to pull requests!
You just publish a set of rules like https://easylist-downloads.adblockplus.org/easylist.txt
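For illustration, a filter list like that is just plain text; hypothetical entries for the domains mentioned elsewhere in this thread would look like this in EasyList syntax (the `||...^` form matches a domain and its subdomains):

```
! Illustrative BarbBlock-style entries in EasyList filter syntax
||functionalclam.com^
||jewelcheese.com^
||futuristicfairies.com^
```

Any blocker that consumes EasyList-format subscriptions can import a file like this directly.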
Does this work for you? https://raw.githubusercontent.com/paulgb/BarbBlock/master/Ba...
That should work, you might want to add a link to "abp:subscribe?location=h||ps://raw.githubusercontent.com/paulgb/BarbBlock/master/BarbBlock.txt&title=BarbBlock" in your readme to allow people to easily install the list by clicking that link.
Edit: replaced https with h||ps to prevent it from showing up as a link (which are apparently cut off if they're too long).
Thanks, will do (https://github.com/paulgb/BarbBlock/issues/15)
Thanks for this! Added it to my own uBlock list.
It didn't work for me with Adblock Plus in Firefox.
Thanks for the report, I'll look into it (https://github.com/paulgb/BarbBlock/issues/16)
And the file could be published on IPFS so hosting it is not relevant. Hash would ensure that you're getting the right file - from any IPFS node. There is even an HTTP bridge for IPFS content.
That may be great as a parallel project, but the point of this one is to take on any legal challenges head-on. At least that's my reading of his explanation.
He's blazing a trail that others who may be more risk averse or have more to lose can then follow. Actually blocking these domains is kind of a side benefit, especially since the most likely people to download the extension are those who could just as easily achieve the end result themselves in other ways.
Unfortunately there is no legal concept of waiver in this situation: the other side can pick and choose weak targets at will and avoid someone who will fight back. Just one more critical flaw in the DMCA system. Contrast this with trademark law where one must universally enforce the mark or risk losing it entirely.
There are many HTTP bridges, I run one: https://www.eternum.io/
If you decide to use IPFS for distribution of your list, I'd be happy to donate some Eternum credits so you can pin it.
If you can distribute the hash to all users, you can probably distribute the blocklist as well. The size isn't the problem.
FYI: The uBlock Origin link (to subscribe to the list) does not work (it seems to just be a copy of the ABP link).
The raw version of the blocklist in this repo can be used that way.
I'm a fan of "Code as protest", but it seems like the more practical solution would be to simply have a separately maintained domain list that could be easily integrated into the adblockers that already used EasyList.
It seems to me that domain names jewelcheese.com and futuristicfairies.com could be put to much better use than they are by this company.
FYI, there appears to be a much longer set of domains owned by this company, all of the same format of nonsensical word pairings.
"sites which allege that they are legally required to be loaded if embedded in other sites."
In that case, maybe the best technical and legal solution is to block any sites that contain their domains completely. I.e. boycott anyone who does business with these people, without "violating" someone's interpretation of DMCA.
As a legal fine point: the list I want isn't just "sites which have used DMCA takedowns to force removal from other blacklists," but more like "sites which allege that they are legally required to be loaded if embedded in other sites."
This would include all Admiral-owned domains (including those that haven't been included in DMCA takedowns yet), and all domains owned by any other companies that believe there is some legal obligation to load their trackers. It's an important list to have.
Echoing other comments, this list should be in a standard .txt form so it can be included by other extensions, so I can pick an extension that does what I want when it encounters such a site (e.g., decline to visit the page that embeds the site).
You'd have to differentiate between a legitimate DMCA takedown and a false claim... maybe some NLP on the work being taken down to see if it falls under fair use?
I don't think this needs to be that complicated. If I'm understanding correctly, the DMCA takedowns in this case are against pages blocked by EasyList. You would just have to correlate DMCA takedowns with historical EasyList filters.
Next step is to just build a script that parses the DMCA takedown notices here and automatically builds a block list out of those domains.
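A minimal sketch of that idea, assuming you have a local clone of a takedown-notice archive (GitHub, for instance, publishes the notices it receives as markdown files). The directory layout and the naive domain regex are assumptions; real notices would need filtering to weed out false positives like the claimant's own domains:

```python
import re
from pathlib import Path

# Naive pattern for anything that looks like a domain name.
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", re.IGNORECASE)

def extract_domains(text):
    """Collect candidate domains mentioned in one takedown notice."""
    return {match.lower() for match in DOMAIN_RE.findall(text)}

def build_blocklist(notice_dir):
    """Scan every notice file and emit EasyList-style block rules."""
    domains = set()
    for path in Path(notice_dir).glob("**/*.md"):
        domains |= extract_domains(path.read_text(errors="ignore"))
    return sorted(f"||{d}^" for d in domains)
```

You'd still want a human pass over the output, since a notice mentions domains on both sides of the dispute.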
Thanks, I plan to. The main reason I targeted Chrome first is that I've heard Firefox's review period is measured in months.
You can self host a signed extension in minutes. Just upload it and then download the signed extension from the developer portal and add it to the github releases for the project. https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Pu...
That's FUD. It's automated and takes minutes, tops.
Edit: in my experience.
This isn't true if you use uglify at all. I literally had to go into IRC and try and figure out why a hotfix for our extension was taking so long to get through the process (we had been sitting in it for at least 3 days if not a week IIRC). The response I got in IRC was that I would have to literally come and ask for someone's assistance to bump up the review on my company's extension each time we hotfixed, as we got placed in a special queue that required an admin and not just any volunteer in part due to our use of uglify causing the code to be obfuscated.
The worst part is that uglify is required for us to even get the extension signed, due to Firefox's arbitrary individual file size limit.
Prior to Mozilla getting their act together it literally was over a month of waiting for them to review our extension, so we just bailed altogether as their entire process is terrible for pretty much no real gain. I then tried again once the queue had dropped from ~400 to ~150 and the wait time finally became reasonable.
I mean the process was/is so bad, they literally had an issue on github to just automatically approve extensions based on some criteria due to the huge backlog. I cannot get an unlisted extension released with the same version as a listed extension, so we completely bailed on using Mozilla's hosting at all, since all it does is cause a liability until we can push out fixes the same day without manual intervention.
To top this off, every single time we made a new release, we'd have to explain the extension, how to build it, etc., and provide the source code. Our extension is also a regular web extension.
I'm curious to know what extension you're creating that's so massive it needs a minifier/obfuscator to be run on it so that it's small enough to fit under the file size limits. Also just to clarify, what is the file size you're being requested to be under? Are we talking kilobytes, or megabytes here, because a megabyte of JS is an awful lot of code in a single file.
Taking a look at Mozilla's publishing guidelines, they're quite clear that you can publish an addon in obfuscated or minified form, but that you need to provide them an unobfuscated/unminified copy of the source code to review as well as instructions on how you performed the obfuscation/minification (presumably to run it themselves and compare the output). All of that seems fairly reasonable since you don't want people distributing malware on AMO. One thing I don't see mentioned anywhere in there is a size limit on files though, so I'm very interested to hear what this apparently undocumented requirement is.
The file size limit is 4/4.5MB.
I work at Virtru. The short summary is that we make a client-side encryption extension for the content of your email. There are certainly things we can do to reduce the file size. However, the business value of doing so is limited, particularly in the context of an extension. It is also much more limited when Firefox itself is vastly less popular than Chrome today and we have no issues from Chrome.
We use webpack, so all our dependencies, which may not be optimized for browser file size, get pulled in as well.
As I previously stated (so not sure why I'm having to repeat this), the entire process, due to the minification, creates a huge barrier for any business that is trying to fix things in a reasonable amount of time. You get thrown into an admin queue, which moves at a snail's pace; EVERY time you upload a new version you have to add instructions again; EVERY time there is a different reviewer with the same question you have to answer them again. Mozilla isn't sitting there actually reviewing your source code in an intelligent way. We were dinged on 2 uses of eval. What is hilarious is that the uses came from within very well known libraries, jQuery and Bluebird. If they were really reading the source code, they would have known that we didn't write that, that one of the uses is in a function we never call, and that Bluebird uses it as a de-optimization strategy to prevent a function optimization that makes objects fast.
As far as I can see it takes over 5 days at least 50% of the time: https://discourse.mozilla.org/t/queue-weekly-status-2017-08-...
(E: and that's currently; go back a few months and it's nearly 90% taking over 10 days. Possibly well over 10 days. https://discourse.mozilla.org/t/queue-weekly-status-2017-05-...)
I'm curious. Does this statistic mix old XUL-based add-ons with the new WebExtension standard add-ons?
As far as I can see, the new WebExtensions-based approach has much better tooling, with automatic analysis and reviewing.
It doesn't show which ones are WebExtensions. Even then, users can access the unreviewed versions of add-ons, IIRC.
No, it takes months, stop FUDing the FUD.
Last time I checked publishing and signing was even built into the tooling itself, making it much more seamless and CI-friendly than the Chrome counter-part.
And the amo team is super responsive on email.
What am I missing?
Not sure about the details, but AFAIK there are two steps: an automated review when you sign an extension, after which you can distribute that extension yourself. But if you want it hosted on Mozilla Add-ons, it needs human review, which takes more time.
Reading their process and guidelines I'm guessing that human review is highly variable as well, mostly depending on the size and complexity of the addon. Something massive that's structured poorly is going to take a long time to review since the poor person doing the review needs to actually be able to understand the code. A small addon or one that's structured very cleanly on the other hand should be a fairly fast review.
Update: I've submitted it for review by Mozilla.
Considering all the commentary about how fast/slow the process is, please post an update when it gets approved, it will be very interesting to have another data point to reference.
It was approved in a matter of hours!
They must be reading this thread.
Nah, I've had extensions reviewed in a few hours
Firefox also supports webextensions.
Please consider publishing a version at addons.mozilla.org too.
The tool web-ext makes this almost effortless.
I've put together a list of sites using Admiral's services (the domains this extension blocks) here: https://gist.github.com/daumiller/114989e6967eb0d4c54b9ab9ff... .
I do not know how complete it may be. It was interesting putting the list together to see how most (but not all) of the users appear just as sleazy as Admiral itself.
(Blocking these sites automatically should be done based on requests pointing to the original list, rather than this derived one, but it's here for reference.)
That's a great technical achievement! How did you find all the sites containing Admiral's library without crawling the whole web?
Can we go one step further and block websites which serve content from these domains? This would be a good first step towards eliminating toxic advertisements.
We'd give up a bit at first, but may win eventually.
Why not just distribute a list, like easylist, that can be added to existing extensions like uBlock Origin?
I wonder, why is it an extension for a specific browser instead of a blacklist that can be used in any browser that has a blocker?
As the claim is regarding circumventing access control, I doubt the specific technical means by which it is blocked is relevant - Admiral's interpretation of the DMCA considers the outcome (the domain is blocked), not that the list includes a domain which they own.
How about splitting the blacklist into multiple pieces (like Shamir's Secret Sharing) held by entities with absolutely no affiliation with each other?
Each piece downloaded on its own can't block anything.
But if a user combines some of the data, certain websites get blocked.
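A toy sketch of the splitting idea: the simplest degenerate case is a two-way XOR split (full Shamir sharing generalizes this to k-of-n shares), where each share on its own is indistinguishable from random bytes:

```python
import os

def split(data: bytes):
    """Split data into two shares; each alone is uniform random noise."""
    pad = os.urandom(len(data))
    return pad, bytes(a ^ b for a, b in zip(pad, data))

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares back together to recover the original data."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))
```

Two unaffiliated parties could each host one share of the blocklist; neither file alone names any domain, which is the point of the comment above.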
This time I really don't understand why you've been downvoted. Now it feels like some powerful users know something I don't, but they won't tell
Because it was a response to a statement that the result matters more than the implementation details, and it consisted of achieving the same effect by cleverly modifying the implementation.
I really meant something like this.
Result matters, but who's to attack when it's in fact the user who mixes some theoretically unrelated things to achieve such result? Sadly I know it wouldn't work in practice, but that's how I understood that comment
Three separate services, each hosting segments of a file that have to be put back together in some way. Either those three sites will have information on how to rebuild the file (revealing the reason they're hosting it in the first place), or there'll be a 4th party involved that provides the information. It starts looking like an organized conspiracy, if a court ends up looking at it. And that's the answer: The target companies file suit against whatever sites seem to be involved in the conspiracy.
Either that, or it ends up being so obscure that the target company(ies) never notice.
I see it as a kind of reductio ad absurdum.
You have an extra "o" in your string :)
I don't think so... "functionalclam.com" both begins with "functio" and ends with "onalclam.com"
Use a blockchain to store it.
> used DMCA takedowns to force removal from other blacklists
Could a hashed TLD blacklist help? Each person downloads a unique hashed TLD blacklist, and the browser checks each domain against the list of hashes (or Bloom filters, for what it's worth).
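A sketch of the hashed-list idea: ship digests instead of cleartext domains, and have the client hash each candidate domain before lookup. SHA-256 here is an arbitrary choice, and a Bloom filter would trade space for a small false-positive rate:

```python
import hashlib

def digest(domain: str) -> str:
    """Hash a domain so the published list never contains it in cleartext."""
    return hashlib.sha256(domain.encode()).hexdigest()

# The distributed list contains only digests, never the names themselves.
HASHED_BLOCKLIST = {digest("functionalclam.com")}

def is_blocked(domain: str) -> bool:
    return digest(domain) in HASHED_BLOCKLIST
```

The list itself then never "contains" any claimant's domain, though anyone can still confirm a suspected entry by hashing it.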
In this case, what if a rule says
- domain starts with "functio"
- domain ends with "onalclam.com"
- domain is no longer than 18 bytes.
Instead of cleartext?
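For what it's worth, those three conditions together do pin down exactly one domain (the prefix, suffix, and length constraint overlap enough to leave no other possibility) without the rule ever spelling the name out. A sketch of such a matcher:

```python
def matches(domain: str) -> bool:
    # Prefix + suffix + length cap: the only string satisfying
    # all three is the 18-byte domain the rule is hinting at.
    return (domain.startswith("functio")
            and domain.endswith("onalclam.com")
            and len(domain) <= 18)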
I think it's intentional, because the whole idea is that these kinds of requests are invalid and an abuse of the DMCA, so if someone were to issue a request the author would fight it to prove its invalidity.
It's an invalid use of the DMCA. If the author is willing to fight it, that's a great choice.
Are there any good non-US Git hosting companies?
Self hosted gitlab?
Git doesn't need hosting... It's distributed for a reason.
I assume the reason why GP is asking is because they want to be able to distribute a repository, i.e. have people know where it is. Without a git host you need to run an http/ssh server from your computer, which isn't an option for many people. And then you'd still need a website to list a link to it, if you want people to find it at all. And that website is just as liable to receive a DMCA for listing the allegedly infringing content, just like torrent sites are.
Hosting such a project on Github, a US-based company which responds to DMCA requests, is perhaps not the most sensible choice.
It's not the business's primary website that would normally be the target of blacklists; it's domains serving undesirable content, like ads.
For samsung.com to be added to the list, first an ad blocker would have to list samsung.com, THEN Samsung would have to use legal shenanigans to get samsung.com removed from blacklists, THEN it would get added.
It seems pretty unlikely. It's likely, in fact, that if Samsung had content worthy of blocking, it would be served in a way that would be easy to block without blocking samsung.com itself: for example, nonsense.samsung.com or samsung.com/whatever.
What if Samsung issues another bogus DMCA? Would you dare blacklist Samsung? If the inconvenience level is high enough, almost nobody would use the blacklist. This only works for small players.
As I note in the Github for this project, that list is woefully incomplete.
See my analysis at the Anti-Adblock Killer repo for more detail -
Admiral has thousands of domains across Google Cloud and AWS hosts.
Presumably they don't all point to unique IP addresses, so would blocking their IPs be more effective?
This seems to be a list of other admiral-owned domains: https://pgl.yoyo.org/adservers/admiral-domains.txt
This is one of the many situations where website owners, content creators, and individuals are being intimidated using DMCA takedown notices. Sometimes there are even fake notices, as in here (https://www.eff.org/deeplinks/2016/10/samsung-sets-its-reput...). There should be proper checks to avoid misuse of DMCA takedown notices.
To be honest? Because I was on a plane without internet, and I didn't know how to create a blacklist but I had another extension I'd created to use as reference :)
I'm not opposed to doing both, once I have an hour to sit down and figure out what is actually involved. There's an issue to track it here: https://github.com/paulgb/BarbBlock/issues/5
congratulations on using your flight so productively :) Thanks for filing the issue! I considered it, but didn't want to be pushy.
Someone has now contributed a list:
the original DMCA takedown was against a uBlock Origin list.
why make an entirely new plugin when you can simply make a new list? As cool as your idea is, I don't need two browser extensions to manage when the first one will happily incorporate your list.
Yep, but I can file a counter-takedown. I know the process from past experience.
awesome - it would be good to publicize this process, and also to post about those experiences. Far too many are intimidated into compliance without knowing their rights.
I'm rather shocked HN is not well versed in this.
That's why it would make more sense to create a list (not a custom extension) and host that list outside of GitHub on a non-US server, where DMCA takedown notices can just be safely ignored.
I think that has the bad side effect of not disputing invalid DMCA takedowns. Instead, we ought to be fighting these invalid DMCA takedowns and increasing the cost of making a false claim. In aggregate, that will make the DMCA less abusable, but it will require a tough fight. Making bad press for companies that file false claims won't hurt either.
Moving information outside US jurisdiction will only allow abusers to continue to harass those who can't move out.
DMCA is targeted at service provider. Unless the code is self hosted, there's a risk that GitHub can take it down to avoid lawsuits.
You would only use a DMCA takedown on a blacklist if you want to force your content to be viewed by others. That is bad acting. DMCA is to protect the rights of content owners from their content being spread maliciously, not for it to be blocked from viewing in an individual's private home or device.
Fair enough. But I was just accepting the OP's conflation of DMCA takedowns and cease-and-desist orders. The concern remains valid in the latter case (I'm not sure the DMCA could even legitimately be used in this rather odd, inverted way, and agree that it's hard to imagine a valid use). My point is that nothing I do to a blacklist could "force" anyone to view my content---but I might have a legitimate interest in correcting a blacklist's mischaracterization or miscategorization of my content.
This raises an important question: are people who use legal means to remedy their inclusion on a blocklist necessarily doing so for nefarious reasons? Not everyone who's used a cease-and-desist or the DMCA process is a bad actor.
This extension isn't necessarily bad---if its purpose is simply to ensure that DMCA takedowns and cease-and-desist orders are properly supported and enforced only with good cause, then that seems valuable. If it ends up as a tool that starts making people who are legitimately trying to protect their livelihood or interests give up by making their legal remedies unenforceable or too onerous to undertake, then maybe we need something a little less cavalier.
The argument is not "they are spreading our trademark" but "they are circumventing our copy protection" and that would happen regardless of the way it is technologically done.
This is a huge attack on the way we all consume the web, because in the end it leads to a society where we are forced to consume ads to participate in the society.
Why not just switch to something like MD5 hashes for matching the domains that do this?
Considering that the CEO of Admiral considers himself a "beloved VC" (literally his words), I'm guessing he'll keep stepping up his aggressive tactics. Typical sociopathic behavior from the type that believes that money trumps everything and that everyone worships him.
I think your comment fell victim to an asterisk eating monster.
Kudos to the author for this. I wonder what the reaction of Admiral will be.
Besides web browser extensions, this sort of thing works nicely in firewalls that support DNSBL (DNS block lists). I've got pfSense running pfBlockerNG, and it whacks both incoming (known malicious) and outbound requests. As a result, anyone on my home network gets the same ad-blocking and other protection, including guests on the Wi-Fi. Nobody has to do anything special, it Just Works.
I take it as a good sign a player in the online ad industry is starting to squirm.
A much easier solution is to just add those domains to your hosts file. Then it'll work across any program or browser you use.
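For example (illustrative entries only, using domains mentioned elsewhere in the thread):

```
# /etc/hosts -- resolve the ad domains to a non-routable address
0.0.0.0 functionalclam.com
0.0.0.0 jewelcheese.com
0.0.0.0 futuristicfairies.com
```

Note that the hosts file matches exact names only, so every subdomain needs its own entry, unlike a DNSBL or an adblock filter rule.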
They tried to use DMCA to censor non-copyright-able information such as this? Ridiculous.
Thank you good Sir.
tl;dr I found a major ticketing vendor was vulnerable to data interception because of a faulty HTTPS setup. They issued a DMCA takedown of a private video I had sent them. I fought it and they threatened to sue but they backed down after the EFF sent a response on my behalf.
Things like this is why I continue to give to the EFF.
To be frank these videos should be public.
> This is not my first DMCA-takedown rodeo
What's the story?
I'm a privacy badger user and I just checked my list, I don't see this domain listed.
Privacy Badger will only block a domain if it appears to be tracking you. Sites linking to this JS that don't try to send tracking info will not be blocked.
doesn't stuff like privacy badger generate these lists on the fly locally?
It looks like an anti-adblock measure:
> after the page finished loading. It removed the real page content and replaced it with a box asking me to whitelist the site.
The reason why they're using so many domains is to circumvent blacklisting, and then the DMCA takedowns are another layer of circumvention on top of that.
What is functionalclam?
Don't blame GitHub, blame the DMCA!
I can still blame GitHub for not fighting against the DMCA with us.
I blame both. Even without any copyright violation GitHub takes the DMCA request.
You have to take the DMCA request or else you lose safe harbor.
the problem is when the safe-harbour entity doesn't actually try to protect their content creators, but blindly accepts any takedown. Youtube is notoriously obedient - to the point where it can threaten content creator's livelihood (see https://www.youtube.com/watch?v=QfgoDDh4kE0 for a famous one).
Doesn't matter - they have to accept the requests and sort it out later to retain safe harbor. That's just the way the law was set up. The whole point is that they claim neutrality - they are the carrier of the information, not screeners or judges of it. The legal back and forth is between the entity posting the content and the entity submitting the DMCA takedown. After the fact, YouTube or GitHub can, should, and sometimes do help fight back against abuse.
Is there any incentive or law that causes Github to have a DMCA web form?
If it's possible, then it would be a good act of protest to require individually signed (by an actual person with a pen, not a picture of a signature), individually snail-mailed requests. This would at least make the process take O(n) time and O(n) money, making it a bit harder to abuse.
Yes, by following DMCA procedures Github stops being liable for its users copyright infringement actions (safe harbor). Should Github not follow procedures defined in DMCA, Github would be on a hook.
I'm describing a form of malicious compliance. Is there a legal requirement that the request be a web form? Or is it allowed to provide only a mailing address?
It can be anything that provides a commercially reasonable way to contact the entity. So fax/p.o. box/email/web form etc all work.
How much flexibility do they have?
It's a real question, can they easily challenge some requests and maintain their safe harbor status?
It's not up to Github to challenge them. It's up to the person(s) responsible for the "violation". In this case, Github is just the messenger (AIUI).
My question is whether there is even a provision in the law that enables Github to challenge the notice.
Not if the notice follows the requirements of DMCA.
I mean, they could probably challenge requests that are obviously invalid like the Admiral one. http://lmgtfy.com/?q=can+you+copyright+a+domain+name (No, you can't)
If you think this is about copyright in a domain name you're missing the point. The DMCA is being used because uBlock is blocking a technology [script download] that is used to prevent access. uBlock is working like a DRM remover (the argument goes) and so is committing contributory copyright infringement.
Modifying uBlock mitigates the argument but doesn't entirely remove it. For example if the people who control uBlock control a third-party source that allows the same preventions to be implemented then technological changes have been made but the situation is legally homologous (AIUI, IANAL).
IMO uBlock needs to provide a facility for a domain to be blocked but say "search online to find blocklists" and have no legal association with the blocklist maintainers, akin to how emulator sites manage ROMs: they stay as legally separated from them as possible. Putting Google in the middle makes getting sued harder; in theory Google is then the one linking people to the tech/info that enables the alleged infringement.
Remember DMCA is strongly weighted towards the accuser in the initial instance and that a service provider has to take down content in order to maintain their safe-harbour protections, leaving it to the alleged infringer to counter the accusation (guilt until claim of innocence).
I'm asking a specific question about the workings of the DMCA.
Does your link address that question? Is your opinion on the DMCA well informed?
The lmgtfy was meant towards github's lawyers, not you. My point was that this is so obviously invalid that they should absolutely have flexibility to challenge it.
Not the first time; it's stupid how GitHub accepts anything as a DMCA request.
That's a stupid idea. What if you ever wanted to start a revocation list, when a domain is added genuinely by mistake? You'd have to use the same priv key for signing those as well, at which point you might as well have a single text file with provenance (aka git)
Well, maybe that if it sounds stupid to you, you didn't think about it enough (that's usually the case).
Make an address that creates transactions containing the blocked domains. To find the list, check the transactions from this address. If an error is made, you can just use another address to build the list again without the domains you want to remove, and have ad blockers look for transactions from that address instead. The ad blockers only reference an address, so they contain no illegal content. Domains can't be removed from transactions. Ad blockers can use another address as a basis if they agree to.
I wonder if there's ever been a better real-world illustration of Maslow's hammer at work.
So is your use of the term "Maslow's hammer". Using the blockchain to maintain a list of domains that is not subject to the whims of the DMCA seems like a good use of the technology to me.
Then go ahead and build it. Maybe it'll get adopted, like Namecoin, by dozens of people. Namecoin was actually a decent idea, because DNS needs to be decentralized. Text files like this do not, at least no more than git already is.
or ... don't try to "but with blockchain!!" an idea with no use for it and which would be ridiculously infeasible and fragile with it.
So every time there is an error on the list, all adblockers have to be re-released to start using the new address?
Doesn't sound very practical.
This can be automated quite easily: have another address whose transactions contain the latest up-to-date source address for the list. Now adblockers just need to look for the last transaction from this address.
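The indirection described here can be sketched as follows, again simulating the chain in memory (all names are hypothetical): the adblocker hardcodes only a pointer address, and the latest transaction from that address names the currently valid list address, so a correction needs no client re-release.

```python
# Hypothetical sketch of the pointer-address indirection. The adblocker
# hardcodes only POINTER_ADDR; resolving it yields the current list address.

chain = []  # append-only list of (sender_address, payload) pairs

def publish(address, payload):
    chain.append((address, payload))

def latest_payload(address):
    """Return the payload of the most recent transaction from `address`."""
    for sender, payload in reversed(chain):
        if sender == address:
            return payload
    return None

def current_blocklist(pointer_addr):
    """Resolve the pointer, then gather all domains sent from the list address."""
    list_addr = latest_payload(pointer_addr)
    return {d for sender, payload in chain
            if sender == list_addr for d in payload}

publish("POINTER_ADDR", "LIST_V1")
publish("LIST_V1", ["evil-ads.example", "oops.example"])

# A mistake is found: publish a fresh list and repoint. Deployed clients
# pick up the fix on their next resolution, with no software update.
publish("LIST_V2", ["evil-ads.example"])
publish("POINTER_ADDR", "LIST_V2")

print(sorted(current_blocklist("POINTER_ADDR")))  # ['evil-ads.example']
```

The trade-off is that the pointer address's key becomes the single thing the maintainer must protect: whoever controls it controls which list every client sees.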
Note that I'm not saying it's better than the current management via the GitHub repos. Nothing beats plain text files. What I'm saying, though, is that if DMCA takedowns on those lists become common, engraving the domains in a blockchain is a good way to defend against them.
But most people here could have come to this conclusion by taking 5 minutes to think about it. I get it, guys, you've heard enough about blockchains. Let's not answer an irrational outburst of feelings with an outburst of feelings in the opposite direction, please (note to parent: this remark is for the whole discussion and the downvotes, not just for you).
Github can take down your single text file.
Nobody can take down your signed blockchain message that was published on Bitcoin, unless they physically find you and force you to sign a revoked list.
If it's a special-purpose blockchain, they can just sue everyone who participates in propagating it. If I were on their side, that's what I'd do. You can also sue everyone who distributes software that uses those particular blocks. And using Bitcoin would be the wrong technical decision (and would still not protect people who distribute special-purpose software that extracts that information).
This is a situation where trying to use technical measures to get around a law is not a good idea. For one thing, the DMCA is incredibly generic and states that "No person shall circumvent a technological measure that effectively controls access to a work protected under this title." So it doesn't matter /how/ it's circumvented, all that matters is whether it was circumvented. But more importantly, if you use a technology to try to get around copyright law (even if your usage is completely legal), you're just asking for the copyright lobby to attack you. Just look at what happened to BitTorrent.
You don't have to build special purpose software, though.
All you've got to do is know the block number it was published in.
Maybe you could even have the "special-purpose software" published in the Bitcoin blockchain itself.
It might still be "illegal", but the purpose is not to be 100% bullet proof.
The purpose is to make enforcement so expensive as to be impractical.
I am glad that you brought up bittorrent, though.
BitTorrent, and torrents in general, are a great example of how you can spend billions trying to enforce copyright law, and yet free Game of Thrones episodes are still a click away, for me.
Technical solutions to fighting the DMCA have worked extraordinarily well.
The copyright lobby has massively failed to achieve its goals.
> You don't have to build special purpose software, though.
> All you got to do is know the block number that it was published on.
> Maybe you could even have the "special-purpose software" published in the Bitcoin blockchain itself.
Okay, and how is your web browser going to access that list? At some point there has to be a path from the "anonymous" blockchain to your browser, and that's who will get sued.
> The purpose is to make enforcement so expensive as to be impractical.
I still think you're trying to solve the wrong problem. Breaking the law doesn't help your cause. You need to challenge people who are abusing laws, because that's how you actually make a change in this arena.
> Bittorrent and torrents in general is a great example of how you can spend billions trying to enforce copyright law, and yet free Game of Thrones episodes are still a click away, for me.
> The copyright lobby has massively failed to achieve its goals.
That's an incredibly optimistic view. The copyright lobby has successfully managed to create the most pervasive DRM systems in existence thanks to the threat of "piracy", including EME. While a large number of people still torrent, the copyright lobby has managed to smear the entire technology. Who distributes their own content via BitTorrent? Almost nobody (Linux distributions are the only example I can think of).
That doesn't sound like success from our side to me. In Australia, ISPs will DNS-block torrenting websites and also null-route any torrent traffic.
> Okay, and how is your web browser going to access that list?
Through blockchain.info, which is also the perfect tool for law enforcement to track possibly fraudulent money transactions, so they can't do without it.
> You need to challenge people who are abusing laws, because that's how you actually make a change in this arena.
Why choose? We can do both. The EFF has expanded in parallel with Tor development. That's nothing uncommon. Plus, it's a huge argument for political groups to be able to say: "we can fight this through technical tools that we already have, but we want to find a peaceful solution with you". You don't negotiate when you can't achieve anything the other side doesn't want.
> Who distributes their own content via BitTorrent?
There's at least Blizzard that I noticed, and I suppose others as well. The thing is that they don't advertise it - why would they? It's an implementation detail (built into their client). But I think of BitTorrent's legacy as way more than that. I'm not sure we would have had Spotify and Netflix without it. Sure, there's DRM now. But music and movies are now affordable. Everybody wins, which is the desired end result.
Regarding ads, how could everybody win? There's one thing people have made clear: they hate ads. So instead of fighting adblockers, the ad industry should find a way to let people discover products without annoying them. For now, some prefer to fight adblockers with legal tools. We're entitled to answer with means just as aggressive, while still maintaining discussion channels with the opposite side.
Please go and read the actual article.
What article? The README of the GitHub repo? I read it. If you have a point to make, please make it.
Wouldn't putting those domains in a blockchain be the proper answer? They couldn't be "taken down", then (if it's a solid one, like BTC or ETH).