Arianity's BestNetTech Profile

Arianity's Comments

  • Dec 28, 2025 @ 05:21am

    And your examples for that, frankly, suck because you don’t understand what you’re talking about. When someone checks my ID in person I hand it to them, they look, and then hand it back. Nothing is stored, transmitted, copied, routed, reviewed, handled, retrieved, audited, or checked.
    This was explicitly covered in the previous post: That doesn’t have the same level of risk as online (due to storage/hacking risk etc), but it absolutely fails the “having someone’s rights impacted” test. It fails the 10% threshold, too.
    It’s a simple transaction with virtually no risks.
    There is in fact risk (risks that the article's author has acknowledged previously, and that have come up in SCOTUS cases on the issue). Not the same level of risk, as I already mentioned, but there is risk. More importantly, even without those risks, the act itself is still a violation of rights. Even if it never leaks, it is a violation of rights to speech/privacy. It is not exactly the same, yes. But it is a violation of rights. You weren't making an argument based on "the risks in this particular case aren't justified", but on "we never do that". Those are two different arguments, and I've explicitly covered why the former is justifiable and the latter isn't. If you want to swap to the former, as I've said, that is justifiable (and not only that, I literally agree with it), but it is not the argument you made initially.
    You’re advocating an incredibly easy to abuse law to ‘protect’ children from something that every parent can already do.
    No, I'm not. I explicitly said I'm not, so why are you lying about it?
    You’re fucking stupid for not seeing this, and I have no time for your condescending bullshit
    If I was that stupid, it should be very easy to actually address the point (and not repeat points I already made, or make up points I'm not making), instead of blustering with insults. Insults you apparently had plenty of time to write. This isn't a time issue. The problem is you still wanted to circlejerk about being more correct. Now you're stuck with a counterexample you don't have an argument for, and so the next best thing is insults. If you don’t want to be condescended to, don’t act like a child. The discussion was perfectly respectful until you resorted to empty insults to avoid an inconvenient counterexample. You're capable of better.

  • Dec 26, 2025 @ 03:18pm

    I suppose I should preemptively add to that last bit: yes, those are more extreme examples, to reinforce the broader point that rights are not completely inviolate. If you'd prefer to stick to more comparable examples, I already gave some above.

  • Dec 26, 2025 @ 02:56pm

    You can’t say “I care about rights” but also “fuck those rights cuz people are lazy” and expect people take you seriously.
    I explicitly didn't say "fuck those rights". I said it needs to be balanced, and then I gave you existing examples where we do in fact balance those rights with other things. And yes, I expect you to take that seriously, instead of just yelling rights as a trump card and strawmanning, because that's how a society of adults living in a reality of trade offs works.
    That’s not how it works.
    It literally is how society works, and we know that because there are literally existing uncontroversial examples of it working like that. You may not think this particular trade off is worth it, but we do in fact make policy trade offs with rights all the time. If you're going to argue otherwise, you need to grapple with those counterexamples (and preferably, understand why those counterexamples exist the way they do).
    Parents can solve this problem right now, and choose not to.
    Yes, and people can solve things like exceptions to the First Amendment by not defaming people, or making true threats. They choose not to. Ditto for the 2nd, etc. That statement by itself is meaningless.

  • Dec 26, 2025 @ 01:57am

    You start out with “I don’t care about the constitution”
    No, I didn't, stop strawmanning. I explained, in detail, that I care about the Constitution, but that we also balance that care with other concerns. With examples of existing uncontroversial policy, to boot. The fact that the Constitution is not absolute does not mean people do not care about it.
    move on to walking back your claims
    Name one. I didn't walk back a single claim.
    while also not understanding a word I wrote. Congrats. You’re still legally, intellectually, and technologically illiterate.
    Totally, which is why you won't mention a single actual detail. I understood what you wrote, I just don't agree with it and explained why, and you don't like that. Instead of actually engaging with an argument you disagree with like an adult, you're swapping to personal insults so you can still feel vindicated. If I was actually illiterate you'd be shoving the how/why down my throat right now. Insults are just the next best thing when you can't.

  • Dec 26, 2025 @ 12:40am

    You’re ignoring the entire premise of this article, that these laws impact others. Nothing else fucking matters. You can’t just decide that 10% of the population having their fundamental rights violated is acceptable.
    I'm not ignoring it, it just wasn't relevant. It's something I've engaged with on these articles, and I do acknowledge it. But yes, we do in fact make this sort of trade off. Other things do matter, even when fundamental rights are impacted. At the end of the day, what matters is what does the most good. Impacting people's rights weighs extremely heavily on the "does the most good" side of things, but we do in fact (rarely) restrict people's rights. Literally, driver's licenses restrict people's right to freedom of movement. We also e.g. check IDs in person for adult services. That doesn't have the same level of risk as online (due to storage/hacking risk etc), but it absolutely fails the "having someone's rights impacted" test. It fails the 10% threshold, too. There are a million other examples. Limiting people's rights is not something to be done lightly, but it is something that is done.
    It’s fucking easy to do.
    I didn't say it wasn't easy. I said most parents don't do it, and that includes parents who are active/involved with their kids. It being 'easy' doesn't matter if people don't do it.
    The reason parents don’t do it now is because it’s NOT. A. FUCKING. PROBLEM. It’s just fucking not. OR they believe it’s a problem and are lazy and incompetent. Neither are enough of an excuse to impact other people.
    If people want to argue that kids accessing adult material isn't a problem, they're free to make that argument. These EFF articles don't. They're also free to argue that it's not worth trading people's rights for. That's totally fine, too. However, if part of that argument includes supposedly less painful alternatives like existing parental tools, that discussion needs to include the downsides, too. Where it does become a problem is when people use parental tools as a cheap way to dodge taking unpopular stances, like saying it's not actually a problem. (To be clear, I'm not accusing the previous commenter of this.)
    You’re trying to use user-focused design principles to argue that it’s OK to impact the rights of other people because shit isn’t perfect.
    What I'm arguing is when evaluating policy, what matters is what ends up doing the most good in practice. And to do that, things like uptake matter. If you aren't considering how people actually respond to a policy, you're going to design shit policy (this goes for age ID laws as well). This is not solely a UI/UX problem, many of these tools have reasonable UI/UX and still don't get used. There's more factors, including yes, a culture of parents that normalizes laziness on the issue.
    If you give access to the internet to a child and don’t monitor it, you’re guilty of neglect.
    I agree. However, the odds of getting any sort of neglect law are basically zero, and that factors in. I would actually prefer putting more responsibility on parents. But just because I want it doesn't mean it's going to happen. And for the record, I do not like current age ID laws like this, and have spoken out against them. I don't think the good outweighs the harms, as is. But I can dislike ID laws while also not wanting to use existing parental tools as an easy (but nonexistent in practice) out.

  • Dec 25, 2025 @ 11:36pm

    Discrimination against the poor, minorities, women, LGBTQ folk, etc. is either an intention or a happy incidental bonus for policies like these.
    Eh, it often is intentional, but sometimes it's also just collateral damage. Oftentimes policy will disproportionately affect the most vulnerable, given that they are by definition harder to reach. You can see that with e.g. the driver's licenses rates themselves. Even in states that aren't trying to suppress the vote and do homeless outreach, you still see disparities.
    If you can’t do something effectively without hurting people, you shouldn’t be doing anything at all.
    There are very very few (no?) policies that don't let literally anyone slip through the cracks. Some are smaller than others, but it's pretty much inevitable you're going to hurt some people. You just have to factor in that miss rate as part of the cost of the policy when weighing whether the policy outweighs status quo. It should weigh heavily, though.

  • Dec 25, 2025 @ 11:08pm

    it’s that tools already exist for this problem and many parents lack either the intuition or willingness to use them.
    The thing is, whether you can get parents to use them is a part of whether it's a good solution or not. If you have a good solution but can't get people to actually do the thing, in some sense it's not actually a real solution. If parents won't/can't use them (or at least there's no reasonable proposal to get them to), the tools may as well not exist. You kind of have to design policy around the citizens you have, not the citizens you wish you had. You see it in other areas of policy too, like vaccine mandates for schools. Parents have the tools to get their kids vaccinated, but we know some won't, so we end up designing the policy to account for that.
    Are the tools perfect? No.
    I don't think we need to expect perfection, but there's a big gap between perfection and essentially negligible uptake. And there doesn't really seem to be any sign of it meaningfully improving, or anyone who has a good idea on how to improve it. Having parents use existing tools would be a way better solution hypothetically, but until someone figures out a way to get them to actually use them in practice at scale, it's kinda moot. And you have to be careful about pointing to a hypothetical solution that will never materialize, and therefore ultimately ends up just being a way to propagate the status quo.

  • Dec 24, 2025 @ 11:06pm

    to an area where 1201 is invoked over something that isn’t even behind a paywall.
    Requiring a paywall to get 1201 protections seems like it would set up a bad incentive to push more things behind paywalls. If you force people to pick between having something fully open or fully protected, with no in between, you're going to push some people who would prefer to be open but protected into a closed garden. And we'll all be worse off for it.

  • Dec 24, 2025 @ 05:46pm

    To be fair, it's kind of both. There are some people who are using AI for things, and also companies like Microsoft shoving it into everything without asking if people want it (or even if they don't), in pursuit of growth. Sometimes it's even the same person. I have been dabbling a bit, and I have to admit AI has been useful for coding. At the same time, I would really like Microsoft/Firefox etc to stop shoving things down my throat. If I want an AI feature I will turn it on myself.

  • Dec 24, 2025 @ 02:48pm

    If Google succeeds here, what stops every major website from deciding they want licensing revenue from the largest scrapers?
    Fair use. Literally, that's what stops it. 1201 has an explicit fair use carve out. What SerpApi is doing is not fair use.
    But pulling up the ladder after you’ve climbed it isn’t protection—it’s rent-seeking.
    Rent seeking is when someone extracts payment in excess of the value they're providing. If it costs, say, $1 million to scrape the entire web, but only $100 to scrape the scraper, it isn't rent seeking for the former to want to not get undercut by someone just copying their work without putting in the effort. SerpApi is still free to scrape the web and build its own index; it just can't steal someone else's.
    It’s that they chose to fight using a legal weapon that, if successful, fundamentally changes how we understand access to the open web
    I don't really see the issue. While 1201 can be abused, this is a case it seems designed for, and it doesn't have to bomb the rest of the web. While Google built up its index out of public works, that index is transformative and should itself be copyrightable. You can build a piece of work out of public data that is itself protected. Like, if you write a book analyzing, say, The Odyssey, your book can be a copyrighted work even if The Odyssey is itself public domain. People are free to analyze The Odyssey themselves, and they can even incorporate parts of yours in their own works via fair use, but they can't just copy/paste yours.
    Google has the resources to solve this problem through better engineering or by raising the actual cost of evasion high enough that SerpApi’s business model fails
    If big companies had the ability to stop scraping simply by nerding harder, they would do so and scraping would be dead. It's not actually that easy to do, while also maintaining a relatively open ecosystem at scale for actual users.

  • Dec 23, 2025 @ 10:54pm

    Here’s a fun game the Trump administration keeps playing: destroy a successful government program, wait a few months, then breathlessly announce you’ve “invented” the exact same thing but with obvious corruption mechanisms baked in.
    One of the big lessons I think we need to take away from the Trump era is how to advertise existing government programs. Most people had no idea that USDS/18F even existed. It's something liberals took for granted (or even looked at negatively, as crass or 'propaganda'), but it seems like there's a huge opening to just... explain to people what various parts of the government do, and how it benefits them. A more informed citizenry is probably less likely to blow everything up. It feels kind of dumb to advertise something like USDS the way a used car salesman would, but it sure does seem to resonate with a certain kind of voter.

  • Dec 23, 2025 @ 10:19pm

    that everyone has the financial means to throw money at something but they choose not to so someone must be losing out.
    I don't think he's assuming everyone has the financial means. There will be things they don't have the financial means to expand, and simply wouldn't get done (or will be done smaller scope). In this particular example with Larian, they do happen to have the means, but that won't always be the case. The issue with replacing labor is still there. Even if you assume some fixed budget of $x, whatever money is going towards AI tools could've gone towards labor instead (which, to be clear, you might get less return on). It's a fundamental trade off, at any budget; not unique to expansion.
    Assuming everyone has money to burn which is an idea taken from la-la land.
    I think you're mixing two different takes. Acknowledging that something requires labor to be done is different from assuming someone has the money to get it done manually. The former is true even if the latter isn't. I think he makes this pretty clear in the next paragraph? The entire discussion around indies etc. is precisely the idea that things can be enabled that wouldn't otherwise be viable.
    And it’s here the fallacy strikes, the assumption that someone using an AI is depriving someone else of work which is exactly the same reasoning the copyright mafia is using
    The difference in the assumption is there isn't a way around it. If you want the thing done, someone has to put in the labor. If it isn't, then the thing won't get made. That part is true regardless of whether someone has the financial means or not. In your software analogy, the assumption breaks because someone who pirated may not have actually converted to a sale.

  • Dec 23, 2025 @ 07:52pm

    Sorry, it wasn't the entire story, just the video attached to the story. (This is, to be clear, a tongue-in-cheek joke, if the tone is not clear)

  • Dec 23, 2025 @ 05:30pm

    The use of copyrighted materials? That’s completely ethical on top of probably being legal.
    It is likely legal (at least in the U.S.; the EU is more complicated, since Article 4 has explicit opt-outs). A lot of anti-AI people would disagree with you that it's ethical, though. It is one of the major complaints about the technology.

  • Dec 23, 2025 @ 01:39pm

    Timothy, you are using exactly the same reasoning as they do and to that I say: FUCK YOU! Do better.
    They're not the same reasoning. Someone who pirates could choose not to buy something, at least hypothetically. To get a task done, your choices are not doing the task, or hiring someone to do it. There is no route, even hypothetical, of it being done without someone doing the labor. He also addresses whether that can be offset on net in literally the next paragraph as well. But even when it does increase productivity for specific tasks/output, that is still labor lost.

  • Dec 23, 2025 @ 01:27pm

    Supporting BestNetTech means supporting a news organization that won’t kill stories to please anyone in power. Not now, not ever.
    Except Amy Klobuchar, apparently.

  • Dec 22, 2025 @ 10:35pm

    We might as well make the project influencing how it’s used, rather than if it’s used.
    Sure, but how? I'm not sure you can (hence the horse armor joke). It's going to be like other tools: Does it make more money (by making a bigger/better product, shaving cost, etc)? Then the industry will move towards it. There will be exceptions, but they will be niche. I do think a nuanced approach is best, but consumer behavior is hard (impossible?) to make nuanced. We can't even get the industry to behave when it comes to things that hurt consumers/workers, like crunch, predatory pricing, sexually harassing female employees etc. Heck, we already can't even get companies to use ethically sourced training data to begin with. And I don't know if you can regulate a nuanced use. To be clear, I don't think you can stop it. I think maximum outrage at most gets you a slightly larger speedbump. We're going to get whatever is market optimal regardless of whether it's good for consumers/workers or not. There's a reason big-time execs are positively giddy about AI, and it's not because of indie competition. Whatever influence we have is subordinate to the mighty dollar.
    All of this, all of it, relies on AI to be used in narrow areas where it can be useful, for real human beings to work with its output to make it actual art versus slop,
    One thing I worry about with concept art specifically, is how it could anchor things. An analogy I've seen used is it's like watching a movie based on a book, and then going back to read the book. The movie will tend to heavily influence how your brain pictures the book. We're kind of seeing this in other places already- people who use LLMs are starting to pick up speech mannerisms from them.

  • Dec 22, 2025 @ 02:45pm

    Two quick problems: one, Section 230 already exempts federal criminal law. It’s right there in section (e)(1). So to the extent this is supposedly about dealing with criminal behavior by platforms, you don’t need this change.
    Section (e)(1) says: "Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute." It does not exempt civil liability for criminal behavior; it only allows criminal statutes to be enforced. And she unfortunately did explicitly say civil: "Establishing a 'Bad Samaritan' carve-out that would deny immunity from civil liability to platforms that purposefully facilitate or solicit third-party content that violates federal criminal law"

  • Dec 19, 2025 @ 08:44pm

    I’ve looked at the list of organizations supporting this bill, and I’ll be forwarding this article to some of the ones I’m most familiar with to see if they actually understand what they’re doing.
    If you have the time, I would highly recommend contacting your Congresspeople, as a part of that. It really matters, even for the bad ones.

  • Dec 19, 2025 @ 08:15pm

    Ah, I see why you're upset. You saw the word publisher and thought I was talking about "platforms vs publishers". That's not what I'm talking about (Mike is correct, the distinction does not exist). What you're missing is that the distinction doesn't exist because websites are publishers, and 230 explicitly protects them for publishing. Mike explains this if you read the linked Bluesky thread under "one claiming that Section 230 means you're not a publisher" in the original article. To quote Mike: "Section 230 not only allows (and actually encourages!) websites to be 'publishers'... it absolutely allows a website to say they are a publisher. That's actually the main point of Section 230 and why it was passed. Because @wyden & Chris Cox knew that internet services acted as publishers... but because of the nature of the internet, they would enable so much speech that they couldn't possibly be expected to review every bit of content for legal landmines. The goal of the law was to encourage sites to be publishers! And the method was to say 'you're not liable as a publisher.'"
    But the entire point of 230 is "you can do traditional publishing activity, but without being held liable for their content, since the internet enables anyone to post anything on your site." This has been explored in a few cases, including the 9th Circuit in Barnes v. Yahoo, in which the court clearly says Yahoo receives Section 230 protections BECAUSE IT IS ACTING AS A PUBLISHER.
    If you prefer a BestNetTech article, Mike also explains it here: "It's an early decision that makes it clear Section 230 protects websites for their publishing activity of third-party content. It clearly debunks the completely backwards notion that you are 'either a platform or a publisher' and only 'platforms' get 230 protections. In Barnes, the court is quite clear that what Yahoo is doing is publishing activity, but since it is an interactive computer service and the underlying content is from a third party, it cannot be held liable as the publisher for that publishing activity under Section 230."
    The reason platform/publisher doesn't exist is because 230 protects websites as publishers, regardless of the type of publishing activity. (Sidenote: this is what that Zeran quote was saying. Zeran is not a random trusted expert, it's the original court case interpreting 230.)
