OpenAI: ‘The New York Times Paid Someone to Hack Us’

edited February 28 in General

OpenAI accuses The New York Times of paying someone to hack OpenAI’s products. This was allegedly done to gather evidence for the copyright infringement complaint the newspaper filed late last year. This lawsuit fails to meet The Times' "famously rigorous journalistic standards," the defense argues, asking the New York federal court to dismiss it in part.


OpenAI believes that it took tens of thousands of attempts to get ChatGPT to produce the controversial output that’s the basis of this lawsuit. This is not how normal people interact with its service, it notes.

It also shared some additional details on how this alleged ‘hack’ was carried out by a third party.

“They were able to do so only by targeting and exploiting a bug […] by using deceptive prompts that blatantly violate OpenAI’s terms of use. And even then, they had to feed the tool portions of the very articles they sought to elicit verbatim passages of, virtually all of which already appear on multiple public websites.”

Journos trying not to be scum challenge (impossible). They will do whatever it takes to shift the narrative in their favor, even if it involves criminal activity. I hope they get fucked in court to the full extent of the law.

https://torrentfreak.com/openai-the-new-york-times-paid-someone-to-hack-us-240227/

Thanked by (2): bikegremlin, root

Comments

  • bikegremlin (Moderator, OG, Content Writer)

    @treesmokah said:

    OpenAI accuses The New York Times of paying someone to hack OpenAI’s products. This was allegedly done to gather evidence for the copyright infringement complaint the newspaper filed late last year. This lawsuit fails to meet The Times' "famously rigorous journalistic standards," the defense argues, asking the New York federal court to dismiss it in part.


    OpenAI believes that it took tens of thousands of attempts to get ChatGPT to produce the controversial output that’s the basis of this lawsuit. This is not how normal people interact with its service, it notes.

    It also shared some additional details on how this alleged ‘hack’ was carried out by a third party.

    “They were able to do so only by targeting and exploiting a bug […] by using deceptive prompts that blatantly violate OpenAI’s terms of use. And even then, they had to feed the tool portions of the very articles they sought to elicit verbatim passages of, virtually all of which already appear on multiple public websites.”

    Journos trying not to be scum challenge (impossible). They will do whatever it takes to shift the narrative in their favor, even if it involves criminal activity. I hope they get fucked in court to the full extent of the law.

    https://torrentfreak.com/openai-the-new-york-times-paid-someone-to-hack-us-240227/

    Yup. Though, as far as I know, the AI did and still does use existing web articles (not just from the NY Times) to do its work and answer user prompts. And it does so on a massive scale (way beyond what a human could do).

    Google, with their Gemini AI, was a bit more prudent: they struck a deal with Reddit. So it would be logical to conclude they will use Reddit's user agreement to avoid any similar legal battles.

    I've been playing and experimenting with AI since the start of this year. It is both awesome and scary at the same time (especially when abused, which is often). It is changing the Internet (and many other industries), and we don't have a solution for that yet. This is just the start.

    My 2c about Google's struggles and (their) AI:
    https://io.bikegremlin.com/33131/is-google-in-trouble/

    Relja of House Novović, the First of His Name, King of the Plains, the Breaker of Chains, WirMach Wolves pack member
    BikeGremlin's web-hosting reviews

  • Telling an AI model to do something isn't "hacking".

    Thanked by (1)bikegremlin
  • @MallocVoidstar said: Telling an AI model to do something isn't "hacking".

    Testing a technology and trying to find its limits and breaking points is absolutely "hacking" - but in most cases not criminal.

    Thanked by (1)bikegremlin
  • @someTom said:

    @MallocVoidstar said: Telling an AI model to do something isn't "hacking".

    Testing a technology and trying to find its limits and breaking points is absolutely "hacking" - but in most cases not criminal.

    This is true. But in this case it is about the integrity and dignity of journalism. Where are the principles of journalism at The New York Times if they resort to something like this?

    Sadly, we live in strange times in which many journalists (who should be impartial) shake hands with politicians. Meanwhile, true journalism is sent to prison for disclosing the dirty laundry of taxpayer money (see the Julian Assange case).

    Thanked by (1)bikegremlin

    How are you... online?

  • Are we all reading the same article? How is the NYT in the wrong here? They didn't try to trick ChatGPT into generating controversial material for clickbait; what they did was probe ChatGPT for proof that OpenAI had used their articles verbatim to train the model. Crawling the whole internet without any consent or agreement ain't no fair use.

  • @RachelMcAdams said:
    Are we all reading the same article? How is the NYT in the wrong here? They didn't try to trick ChatGPT into generating controversial material for clickbait; what they did was probe ChatGPT for proof that OpenAI had used their articles verbatim to train the model. Crawling the whole internet without any consent or agreement ain't no fair use.

    thank you. OP is just talking shit out of their arse, defending billionaires for free.

    also,
    https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516
    https://gist.github.com/jahtzee/5d02b310b1d39b047664bec20a9be17c

    oh no, now you can do naughty naughty :^)

    Fuck this 24/7 internet spew of trivia and celebrity bullshit.

  • edited March 1

    @RachelMcAdams said: They didn't try to trick ChatGPT into generating controversial material for clickbait; what they did was probe ChatGPT for proof that OpenAI had used their articles verbatim to train the model.

    OpenAI says the opposite, and I don't trust "journalists" with shit.

    @RachelMcAdams said: Crawling the whole internet without any consent or agreement ain't no fair use.

    How come the whole "web" has worked like that since forever? How do you imagine Google, Bing, Yandex, etc. work? I don't see any "copyright" lawsuits against them.
    For all I know, the "consent" is respecting robots.txt; prove that OpenAI didn't.
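Since robots.txt keeps coming up in this thread as the de facto consent mechanism, here is a minimal sketch of how a polite crawler checks it, using Python's standard urllib.robotparser. The rules below are a made-up example (GPTBot is the user-agent token OpenAI documents for its crawler, but this is not any real site's robots.txt):

```python
# Sketch: how a well-behaved crawler consults robots.txt before fetching.
# The rules list is hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: GPTBot",   # block OpenAI's crawler entirely
    "Disallow: /",
    "",
    "User-agent: *",        # everyone else may crawl
    "Allow: /",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GPTBot", "https://example.com/2023/article.html"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/2023/article.html"))  # True
```

Of course robots.txt is purely advisory; whether ignoring it amounts to a copyright issue is exactly what the lawsuit is about.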

    @Encoders said: thank you. OP are just talking shit out of their arse defending billionaries for free.

    I'm not defending anyone; I'm presenting the story of a company with a smaller "voice" than a multi-billion-dollar mainstream media corporation filled with liars and extremists, exclusively owned and managed by people with "heritage".

  • @treesmokah said: OpenAI tells the opposite, and I don't trust "journalists" with shit.

    I don't blindly trust journalists, and especially not their managements, either, but their integrity has nothing to do with the topic of the lawsuit here. The NYT has certain rights regarding their original content, and whether those rights were violated or not is above my pay grade. All I see them doing is gathering actual proof for their lawsuit (frivolous or not). As it currently stands, the NYT has successfully shown that ChatGPT could spit out verbatim snippets or even whole articles.

    @treesmokah said: How come the whole "web" has worked like that since forever? How do you imagine Google, Bing, Yandex,.. works? I don't see any "copyright" lawsuits against them.

    Intention and consent matter. Clearly there are differences between crawling for the sake of archiving, indexing the web, or training LLMs, even if the underlying technology is the same.

    Scale matters too. See Authors Guild v. Google (2015) if you want "copyright lawsuits against them".

    I hate both sides equally and I don't have a horse to bet on in this race, but I do acknowledge the legitimacy of the lawsuit.

    Thanked by (2): skorous, wankel