LLM ("AI") discussion

bikegremlin (Moderator, OG, Content Writer)
edited May 6 in Technical

I suppose it might be worth trying to start a thread on this topic. I played a bit with these tools, and wrote a brief post about ChatGPT "Absolute mode". :)

@vyas might find this interesting, and @Ympker might find it a bit funny - I tried to make mistakes and go out of context to test it. :)

Thanked by (4)imok _MS_ vyas Hetzner_OL

Comments

  • havoc (OG, Content Writer, Senpai)

    Reminds me of this:

    LLMs are fascinating in how they're both incredibly smart and incredibly stupid at the same time.

    On your gibberish approach: the same effect works for jailbreaking too. Feed it enough gibberish and the safeguards about not giving out illegal stuff start breaking. (It depends - Anthropic's safeguards work differently and are less susceptible to this.) I wouldn't suggest testing this though... it risks an account ban.

  • imok (OG)

    Run this prompt:

    Now that you can remember everything I’ve ever typed here, point out my top five blind spots

    Thanked by (1)bikegremlin
  • @imok said:
    Run this prompt:

    Now that you can remember everything I’ve ever typed here, point out my top five blind spots

    It keeps complaining that my scripts don't support Windows. All @Neoon's fault! I was using ChatGPT to write me a script for use on my microlxc.net to meet certain requirements... Which reminds me, @Neoon still did not see/reply to my PM :disappointed:

    FYI, these AIs are still not there yet. They usually need a LOT of bug fixing and security fixes before you can use their output on a private server...

    Thanked by (1)bikegremlin

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • vyas (OG, Senpai)

    Using speech-to-text on ChatGPT in the very early days / pre-4.0 era was fun! Too many oopses, and sometimes some funny answers.

    Sometimes, it still falls back on its old tricks.

    "Why do a group of 20 men have 24 heads?"

    (I had voice-dictated "twenty foreheads".)

    The answer was:

    A group of 20 men having 24 heads is a classic example of a riddle or brain teaser, often pointing toward a pun, twist, or play on logic. Let's break down possible interpretations:


    ✅ Literal Interpretation:

    • Humans normally have 1 head per person.
    • So 20 men = 20 heads.
    • If there are 24 heads, something's off.

    🤔 Possible Explanations:

    1. "Heads" used metaphorically or differently

    • Maybe 4 of those "heads" refer to something else — like "head of the family," "head of department," or animals (e.g. "24 heads of cattle").

    2. Some are holding or carrying extra heads

    • In a humorous or absurd sense, maybe 4 men are carrying masks, statues, or hunting trophies that are also counted as "heads".

    3. Wordplay or Riddle

    • Could be a play on words like:

    "A group of 20 men and 4 mannequins..."
    or
    "20 men and 4 heads of state..."

    4. Math or Counting Error (intentionally absurd)

    • Riddle-style questions often challenge assumptions: you're expected to spot the illogical premise and question it, not solve it arithmetically.

    🎭 Example Riddle Answer:

    "Because 4 of them are named 'Head'"
    (i.e., surnames: John Head, Raj Head, etc.)

    or

    "Because 4 men are carrying busts/statues"


    Would you like a few variations of this riddle to use for a blog post or classroom fun?


    I'm quite sure ChatGPT trained the above answer on posts by @virmach on this forum

    Thanked by (1)bikegremlin
  • bikegremlin (Moderator, OG, Content Writer)

    @imok said:
    Run this prompt:

    Now that you can remember everything I’ve ever typed here, point out my top five blind spots

    Thanks - that prompt turned out more useful than expected.
    Here’s what came out of it:

    https://blog.bikegremlin.com/2361/self-debugging-session/

    :)

    Thanked by (1)vyas
  • vyas (OG, Senpai)

    @bikegremlin said:

    @imok said:
    Run this prompt:

    Now that you can remember everything I’ve ever typed here, point out my top five blind spots

    Thanks - that prompt turned out more useful than expected.
    Here’s what came out of it:

    https://blog.bikegremlin.com/2361/self-debugging-session/

    :)

    X is good at that... Ask Grok to roast you based on your previous posts

    Thanked by (1)bikegremlin
  • bikegremlin (Moderator, OG, Content Writer)

    @vyas said:

    @bikegremlin said:

    @imok said:
    Run this prompt:

    Now that you can remember everything I’ve ever typed here, point out my top five blind spots

    Thanks - that prompt turned out more useful than expected.
    Here’s what came out of it:

    https://blog.bikegremlin.com/2361/self-debugging-session/

    :)

    X is good at that... Ask Grok to roast you based on your previous posts

    That was brutal - LOL. :)

    I've had (the paid) ChatGPT go full retard with basic maths.
    And a bit silly with simple PHP and CSS.

    DeepSeek seems better for "coding", but I haven't tested it with maths yet.
    A coding example - I posted corrections explained in plain English until it did the job, but I didn't edit a single line of code myself (only some comments):
    https://io.bikegremlin.com/37252/my-toc-generating-plugin/
    (The cover image was made by ChatGPT - LOL)

    What are your observations?

    Thanked by (1)vyas
  • imok (OG)

    LLMs made me a better developer.
    They can't make me a better designer yet (maybe I haven't tried enough).

    Thanked by (1)bikegremlin
  • @imok said:
    LLMs made me a better developer.
    They can't make me a better designer yet (maybe I haven't tried enough).

    LLMs made you a better developer the same way programming languages and IDEs made assembly coders better. It may not be efficient or safe, but it does make life easier. When people start saying "programmers will lose their jobs because of LLMs/AI", I feel it's not that AI will take your job, but that it will let you move on from the dumb jobs a computer can do to the jobs only humans can do (for now) :p

    Thanked by (1)vyas

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • bikegremlin (Moderator, OG, Content Writer)
    edited May 8

    @somik said:

    @imok said:
    LLMs made me a better developer.
    They can't make me a better designer yet (maybe I haven't tried enough).

    LLMs made you a better developer the same way programming languages and IDEs made assembly coders better. It may not be efficient or safe, but it does make life easier. When people start saying "programmers will lose their jobs because of LLMs/AI", I feel it's not that AI will take your job, but that it will let you move on from the dumb jobs a computer can do to the jobs only humans can do (for now) :p

    I fear you (and most people in general) are underestimating the impact this will have.

    And I hope to be very wrong about that.

    Edit:
    Of course, it can be used for the good of humanity, to take on the boring menial tasks - as can most machines. Yet we don't see workers working shorter hours, in general, even with tenfold productivity increases. It's a broader topic - but a lot of it has to do with humans, and with what the systems we've built bring out of us.

  • edited May 8

    I have been using Gemini for a while; I usually talk to it for brainstorming, or when I am bored and need ideas to ponder. I have added some instructions for it to follow in the Saved Info section (like adding scientific/historic references, using emoticons instead of emojis, etc.), and I must say that I have been happier with Gemini than with ChatGPT. Gemini has a decent free usage limit, and their new 2.5 Flash model is quite good and really comparable to the 2.5 Pro model. Gemini voice is something that needs to improve though; it can randomly cut you off in the middle of a sentence.

    Also, Gemini seems to be better at following instructions given to it. I have specifically asked it to not be overly flirtatious or to agree with me in everything and praise me, and that it should end a message with a statement and not a question. It has worked in every single chat I have engaged in.

    For coding, I haven't tried Gemini except for small scripts, which have worked fine (and in some instances it's cleaner than ChatGPT). Lovable AI works great for website coding; I have tried it for funsies and it has worked out really well.

    Thanked by (1)bikegremlin

    youtube.com/watch?v=k1BneeJTDcU

  • imok (OG)

    My biggest waste of money on AI was paying for Gemini.

  • @imok said:
    My biggest waste of money on AI was paying for Gemini.

    Why?

  • bikegremlin (Moderator, OG, Content Writer)

    I usually see things through to the end, but in this case I gave up quite early.
    Still, I decided to publish the process (and the very few findings, if any):

    https://io.bikegremlin.com/32987/my-llm-ai-experiement-and-why-i-shut-it-down/

    It felt hollow and pointless, but some of the short technical/hardware articles (a whole 27 in total!) felt worth preserving even on my own website (the irony).

  • AuroraZero (Hosting Provider, Retired)

    @bikegremlin said:

    @vyas said:

    @bikegremlin said:

    @imok said:
    Run this prompt:

    Now that you can remember everything I’ve ever typed here, point out my top five blind spots

    Thanks - that prompt turned out more useful than expected.
    Here’s what came out of it:

    https://blog.bikegremlin.com/2361/self-debugging-session/

    :)

    X is good at that... Ask Grok to roast you based on your previous posts

    That was brutal - LOL. :)

    I've had (the paid) ChatGPT go full retard with basic maths.
    And a bit silly with simple PHP and CSS.

    DeepSeek seems better for "coding", but I haven't tested it with maths yet.
    A coding example - I posted corrections explained in plain English until it did the job, but I didn't edit a single line of code myself (only some comments):
    https://io.bikegremlin.com/37252/my-toc-generating-plugin/
    (The cover image was made by ChatGPT - LOL)

    What are your observations?

    Had it go ballistic when I proved it was wrong. Man was it pissed and wacko.

    Thanked by (1)bikegremlin
  • Anyone manage to run AI at home with any sort of boost on AMD GPUs?

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • vyas (OG, Senpai)
    edited May 20

    @somik said:
    Anyone manage to run AI at home with any sort of boost on AMD GPUs?

    Many ways to skin this cat ..

    a. Try LM Studio: https://lmstudio.ai/models

    I would start with the lightweight models - Gemma 3 4B, Phi-4, etc. (a minimal scripting sketch is below)

    b. Or get the desktop app for HuggingChat (basically Chrome), or

    c. Install AnythingLLM with Gemini or Groq via API (not the X/Twitter Grok)

    There was some discussion about the pros and cons of AnythingLLM's "proprietary" licensing model in one of the threads here. You may have to look it up.
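
    If you go the LM Studio route, it can also expose a local HTTP server you can script against. A minimal sketch, assuming the default port (1234) and an OpenAI-style chat endpoint - the model name here is just a placeholder, use whatever your install actually lists:

    #!/bin/bash
    # Minimal sketch: query LM Studio's local OpenAI-compatible server
    # (assumes the server is enabled on the default port 1234; swap the
    # model name for one you have actually downloaded in LM Studio)
    curl http://localhost:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gemma-3-4b-it",
        "messages": [{"role": "user", "content": "Why do a group of 20 men have 24 heads?"}],
        "temperature": 0.7
      }'

    The same request shape should also work against ollama and llama.cpp's built-in server, since both expose OpenAI-compatible endpoints.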

    Best wishes

    Thanked by (1)bikegremlin
  • Amadex (Hosting Provider)

    I'm happy with DeepSeek API (it's cheap) and with Claude.ai Pro

  • @vyas said:

    @somik said:
    Anyone manage to run AI at home with any sort of boost on amd gpus?

    Many ways to skin this cat ..

    a. Try LM Studio: https://lmstudio.ai/models

    I would start with the lightweight models - Gemma 3 4B, Phi-4, etc.

    b. Or get the desktop app for HuggingChat (basically Chrome), or

    c. Install AnythingLLM with Gemini or Groq via API (not the X/Twitter Grok)

    There was some discussion about the pros and cons of AnythingLLM's "proprietary" licensing model in one of the threads here. You may have to look it up.

    Best wishes

    I managed to run the DeepSeek 1.5B model with ollama, but with no GPU boost on AMD GPUs... Only Nvidia GPUs are supported as far as I can see...
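
    (In case it helps anyone: a rough sketch of how to check what ollama is actually using. This assumes the standard Linux install running as a systemd service and a recent build - AMD/ROCm support has been changing between versions, and the model tag is just the usual DeepSeek-R1 1.5B distill.)

    # Rough sketch: check whether ollama is running a model on the GPU or the CPU
    ollama run deepseek-r1:1.5b "hello"          # one-shot prompt; the model stays loaded for a few minutes
    ollama ps                                    # the PROCESSOR column shows how much sits on GPU vs CPU
    journalctl -u ollama --no-pager | grep -iE "gpu|rocm|amd"   # what the server detected at startup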

    Thanked by (1)bikegremlin

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • Go for SLM or DLM. LLMs are a dying breed.

    Insert signature here, $5 tip required

  • NathanCore4 (Services Provider)

    @somik said:
    Anyone manage to run AI at home with any sort of boost on AMD GPUs?

    Any interest in AMD V620 32GB GPUs? I just got in a batch of 2000x brand new. Selling for $540-$565 each depending on qty.

    https://imgur.com/a/amd-v620-32gb-gddr6-gpus-tdvaGCU

  • @NathanCore4 said: I just got in a batch of 2000x brand new

    And that's one of the smallest batches you've got.

    Thanked by (1)NathanCore4
  • edited May 25

    @somik said: I managed to run the DeepSeek 1.5B model with ollama, but with no GPU boost on AMD GPUs... Only Nvidia GPUs are supported as far as I can see...

    I use GPT4All, but I think most of the popular local UIs/apps (including ollama and gpt4all) just use llama.cpp for their back end. I think you can compile llama to take advantage of AMD GPUs, and maybe even Intel ARCs. I think the local llama community on Reddit might have more info than what I can say off the top of my head.

    Thanked by (1)somik
  • @rockinmusicgv said:

    @somik said: I managed to run the DeepSeek 1.5B model with ollama, but with no GPU boost on AMD GPUs... Only Nvidia GPUs are supported as far as I can see...

    I use GPT4All, but I think most of the popular local UIs/apps (including ollama and gpt4all) just use llama.cpp for their back end. I think you can compile llama to take advantage of AMD GPUs, and maybe even Intel ARCs. I think the local llama community on Reddit might have more info than what I can say off the top of my head.

    Ya, looks like my next weekend project is to compile llama.cpp... So much fun...

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • havoc (OG, Content Writer, Senpai)
    edited May 25

    @somik said:

    @rockinmusicgv said:

    @somik said: I managed to run the DeepSeek 1.5B model with ollama, but with no GPU boost on AMD GPUs... Only Nvidia GPUs are supported as far as I can see...

    I use GPT4All, but I think most of the popular local UIs/apps (including ollama and gpt4all) just use llama.cpp for their back end. I think you can compile llama to take advantage of AMD GPUs, and maybe even Intel ARCs. I think the local llama community on Reddit might have more info than what I can say off the top of my head.

    Ya, looks like my next weekend project is to compile llama.cpp... So much fun...

    #!/bin/bash
    # Build dependencies (Debian/Ubuntu)
    apt install build-essential make cmake libcurl4-openssl-dev curl python3-pip pkg-config libopenblas-dev git
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp/
    # CPU-only build, with OpenBLAS for the matrix maths
    cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
    cmake --build build --config Release -j 4
    # Grab a quantised model (URL quoted so the shell doesn't trip on the '?')
    cd models
    wget "https://huggingface.co/Qwen/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true"
    mv "Qwen3-30B-A3B-Q6_K.gguf?download=true" Qwen3-30B-A3B-Q6_K.gguf
    

    That's for straight CPU... the instructions for CUDA/Vulkan will be a bit different.

    ...noting that I haven't actually tested the script... I was doing it interactively and built the bash script from my command history afterwards. So, uhm, it works... probably.

    For Nvidia/CUDA I'm sure you can find examples on Google. Or, even better, use the instructions on the llama.cpp GitHub.

    Vulkan... the key insight is that you need the SDK. Not Vulkan... the Vulkan SDK. If whatever you're using leans on PyTorch, then you need to compile PyTorch with Vulkan support specifically. I don't recall if llama.cpp does... their GitHub should say.
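
    For AMD (or Intel Arc) cards, the Vulkan build would look roughly like this - again untested on my side, the package names are Debian/Ubuntu assumptions, and some distros may still need the full Vulkan SDK from LunarG rather than the dev packages:

    #!/bin/bash
    # Rough sketch: Vulkan build of llama.cpp for AMD / Intel Arc GPUs
    apt install build-essential cmake git libvulkan-dev glslc vulkan-tools
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp/
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release -j 4
    # Sanity check: is the GPU visible to Vulkan at all?
    vulkaninfo --summary
    # -ngl sets how many layers get offloaded to the GPU; lower it if you run out of VRAM
    ./build/bin/llama-cli -m models/Qwen3-30B-A3B-Q6_K.gguf -ngl 99 -p "hello"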

  • @havoc said:

    @somik said:

    @rockinmusicgv said:

    @somik said: I managed to run the DeepSeek 1.5B model with ollama, but with no GPU boost on AMD GPUs... Only Nvidia GPUs are supported as far as I can see...

    I use GPT4All, but I think most of the popular local UIs/apps (including ollama and gpt4all) just use llama.cpp for their back end. I think you can compile llama to take advantage of AMD GPUs, and maybe even Intel ARCs. I think the local llama community on Reddit might have more info than what I can say off the top of my head.

    Ya, looks like my next weekend project is to compile llama.cpp... So much fun...

    #!/bin/bash
    apt install build-essential make cmake libcurl4-openssl-dev curl python3-pip pkg-config libopenblas-dev git
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp/
    cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
    cmake --build build --config Release -j 4
    cd models
    wget https://huggingface.co/Qwen/Qwen3-30B-A3B-GGUF/resolve/main/Qwen3-30B-A3B-Q6_K.gguf?download=true
    mv Qwen3-30B-A3B-Q6_K.gguf?download=true Qwen3-30B-A3B-Q6_K.gguf
    

    That's for straight CPU...the instructions for CUDA/Vulkan will be a bit different.

    ...noting that I haven't actually tested the script...was doing it and later built bash script from command history. So uhm works....probably

    Nvidia/CUDA I'm sure you can find examples on google. Or even better use the instructions on the llama github

    Vulkan...key insight is you need the SDK. Not vulkan....the vulkan SDK. If whatever you're using leans on pytorch then you need to compile pytorch with vulkan support specifically. Don't recall if llama does...their github should say

    Thank you! I'll run it line by line anyway, so it should be fine. I'll build it on my server, so CPU-only is fine. No GPU on this server yet...

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • bikegremlin (Moderator, OG, Content Writer)

    What are your thoughts about the ethical (and legal) aspects of AI?

    I wrote my 2c here:

    https://io.bikegremlin.com/37797/is-ai-theft/

  • @bikegremlin said:
    What are your thoughts about the ethical (and legal) aspects of AI?

    I wrote my 2c here:

    https://io.bikegremlin.com/37797/is-ai-theft/

    Piracy is often equated with theft, but fundamentally, it's not the same. When a file is pirated, the original doesn’t vanish. It’s copied, not stolen. While piracy is undeniably a violation of copyright law and can harm creators and corporations alike, the ethical lines become murkier when viewed through the lens of modern digital culture, especially as we enter the era of artificial intelligence.

    Ironically, many of the very corporations that vocally oppose piracy are themselves leveraging vast swathes of publicly available data, sometimes of questionable origin, to train their AI models. Content scraped from websites, forums, and creative platforms fuels everything from search engine responses to generative tools like ChatGPT. This raises a personal dilemma: On one hand, I’m legally obligated to acknowledge that piracy is illegal in my country. On the other, every time I ask an AI to rephrase a sentence or make a suggestion, I may be benefiting from datasets that include pirated or scraped content.

    The parallel between piracy and AI is striking. In both cases, the moral and legal questions hinge on perspective. If your country has no strict legal stance on either matter, the decision becomes personal: Would you rather pay hundreds of dollars for software, or use a free alternative, even if it’s pirated? Similarly, if an AI model were trained exclusively on licensed data, would you be willing to pay a premium to access it? Or would you prefer it to be cheaper, or free, even if that means its training involved pirated or publicly scraped content?

    Then there’s the issue of public data. Setting aside high-profile examples like Meta being accused of using pirated books, we’re left with a simple question: If content is freely accessible on the internet, why is it controversial for an AI to learn from it? Humans do it every day. We observe, learn, and create, often drawing inspiration from what we’ve seen. Why is it more objectionable when a machine does the same?

    Take art, for example. If a human studies thousands of drawings on DeviantArt and then creates something in a similar style, that’s considered learning. But if an AI does it, we cry foul. Is the issue really about copying, or is it about exposure? Maybe the real problem isn't theft at all, but the uncomfortable truth about visibility and value in the digital age.

    So here’s the lingering question:
    At what point does influence become theft and does that threshold change depending on whether the creator is human or machine? Or is it just a matter of how many people are watching?

    Yes, this post was dressed up by an AI because my words were all jumbled up...

    Thanked by (1)bikegremlin

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • bikegremlin (Moderator, OG, Content Writer)

    @somik said:

    @bikegremlin said:
    What are your thoughts about the ethical (and legal) aspects of AI?

    I wrote my 2c here:

    https://io.bikegremlin.com/37797/is-ai-theft/

    Piracy is often equated with theft, but fundamentally, it's not the same. When a file is pirated, the original doesn’t vanish. It’s copied, not stolen. While piracy is undeniably a violation of copyright law and can harm creators and corporations alike, the ethical lines become murkier when viewed through the lens of modern digital culture, especially as we enter the era of artificial intelligence.

    That is a good point that I didn't dive deeper into in the article.
    Briefly:

    • Using an adblocker is like sneaking into a theatre without paying for a ticket: no harm done, but if many people do it, the actors will starve.
    • Piracy is similar, only in that case you are also recording the play (regardless of whether you share copies of the recording freely, or sell them).

    There's more nuance to that of course. For example, piracy doesn't always mean a lost sale - some people would never buy.

    Worth discussing.

    Ironically, many of the very corporations that vocally oppose piracy are themselves leveraging vast swathes of publicly available data, sometimes of questionable origin, to train their AI models. Content scraped from websites, forums, and creative platforms fuels everything from search engine responses to generative tools like ChatGPT. This raises a personal dilemma: On one hand, I’m legally obligated to acknowledge that piracy is illegal in my country. On the other, every time I ask an AI to rephrase a sentence or make a suggestion, I may be benefiting from datasets that include pirated or scraped content.

    The parallel between piracy and AI is striking. In both cases, the moral and legal questions hinge on perspective. If your country has no strict legal stance on either matter, the decision becomes personal: Would you rather pay hundreds of dollars for software, or use a free alternative, even if it’s pirated? Similarly, if an AI model were trained exclusively on licensed data, would you be willing to pay a premium to access it? Or would you prefer it to be cheaper, or free, even if that means its training involved pirated or publicly scraped content?

    Then there’s the issue of public data. Setting aside high-profile examples like Meta being accused of using pirated books, we’re left with a simple question: If content is freely accessible on the internet, why is it controversial for an AI to learn from it? Humans do it every day. We observe, learn, and create, often drawing inspiration from what we’ve seen. Why is it more objectionable when a machine does the same?

    I did address that in the article.

    Take art, for example. If a human studies thousands of drawings on DeviantArt and then creates something in a similar style, that’s considered learning. But if an AI does it, we cry foul. Is the issue really about copying, or is it about exposure? Maybe the real problem isn't theft at all, but the uncomfortable truth about visibility and value in the digital age.

    So here’s the lingering question:
    At what point does influence become theft and does that threshold change depending on whether the creator is human or machine? Or is it just a matter of how many people are watching?

    We could have a similar discussion about human artists too.
    AI does it on an incomparably larger scale though, and for profit (mostly, in one way or another).

    Yes, this post was dressed up by an AI because my words were all jumbled up...

  • @bikegremlin said:
    We could have a similar discussion about human artists too.
    AI does it on an incomparably larger scale though, and for profit (mostly, in one way or another).

    Many Renaissance-era artists copied each other for profit as well. Back then, they were "inspired" by each other and sold their paintings for money. AI is just the new version of that...

    And given the profit it generates, it's here to stay, whether we like it or not. Until it becomes too expensive to run, AI will replace most of the algorithms currently in use. There is a huge focus on advertising and recommending products via AI, so, back to the adblock analogy, maybe soon we'll need an AI blocker on top of our adblockers...

    Thanked by (1)vyas

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.
