Two Months Free Google Gemini!


Comments

  • @dev_vps said:

    @Not_Oles said:

    @Neoon said:
    is API access included?

    I am guessing not. I haven't seen a reference to API. I looked at the FAQ. I looked at the Help. I asked Google Gemini whether my subscription had API access, and it gave me a link to what looks like the Help page for the Winklevoss twins' Gemini. :) If there is a way for me, at my current subscription level, to talk with a live person to get support for Google Gemini, I haven't figured out how.

    If anyone finds out whether API access is included, please let me know. Thanks!

    Got my Gemini API keys
    https://ai.google.dev/pricing

    Free indefinitely?

  • @Joseph said:

    @dev_vps said:

    @Not_Oles said:

    @Neoon said:
    is API access included?

    I am guessing not. I haven't seen a reference to API. I looked at the FAQ. I looked at the Help. I asked Google Gemini whether my subscription had API access, and it gave me a link to what looks like the Help page for the Winklevoss twins' Gemini. :) If there is a way for me, at my current subscription level, to talk with a live person to get support for Google Gemini, I haven't figured out how.

    If anyone finds out whether API access is included, please let me know. Thanks!

    Got my Gemini API keys
    https://ai.google.dev/pricing

    Free indefinitely?

    Two requests per minute

  • Not_Oles (Hosting Provider, Content Writer)

    @dev_vps said:

    @Joseph said:

    @dev_vps said:

    @Not_Oles said:

    @Neoon said:
    is API access included?

    I am guessing not. I haven't seen a reference to API. I looked at the FAQ. I looked at the Help. I asked Google Gemini whether my subscription had API access, and it gave me a link to what looks like the Help page for the Winklevoss twins' Gemini. :) If there is a way for me, at my current subscription level, to talk with a live person to get support for Google Gemini, I haven't figured out how.

    If anyone finds out whether API access is included, please let me know. Thanks!

    Got my Gemini API keys
    https://ai.google.dev/pricing

    Free indefinitely?

    Two requests per minute

    Interestingly, when I visit https://ai.google.dev/pricing it seems to say 15 RPM (Requests Per Minute):


  • @Not_Oles said:

    @dev_vps said:

    @Joseph said:

    @dev_vps said:

    @Not_Oles said:

    @Neoon said:
    is API access included?

    I am guessing not. I haven't seen a reference to API. I looked at the FAQ. I looked at the Help. I asked Google Gemini whether my subscription had API access, and it gave me a link to what looks like the Help page for the Winklevoss twins' Gemini. :) If there is a way for me, at my current subscription level, to talk with a live person to get support for Google Gemini, I haven't figured out how.

    If anyone finds out whether API access is included, please let me know. Thanks!

    Got my Gemini API keys
    https://ai.google.dev/pricing

    Free indefinitely?

    Two requests per minute

    Interestingly, when I visit https://ai.google.dev/pricing it seems to say 15 RPM (Requests Per Minute):

    You guys have different models selected.

  • Not_Oles (Hosting Provider, Content Writer)

    @BruhGamer12 said: You guys have different models selected.

    Good catch! Yeah, it's 2 RPM as well for me if I select Gemini 1.5 Pro instead of 1.5 Flash. Thanks very much @BruhGamer12!
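
    For anyone who wants to try the keys, here is a minimal sketch of a call through the google-generativeai Python package. It assumes you already created an API key in AI Studio and exported it as GEMINI_API_KEY; the model name and free-tier quotas may differ from what the pricing page shows today.

    ```python
    # Minimal sketch: one Gemini API call with an AI Studio key.
    # Assumes `pip install google-generativeai` and GEMINI_API_KEY set;
    # model names and free-tier RPM limits may change over time.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    # 1.5 Flash showed the higher free RPM in this thread; 1.5 Pro was 2 RPM.
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content("Is API access included with Gemini Advanced?")
    print(response.text)
    ```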



  • edited May 15

    If you guys have fancy computers or servers, maybe you should consider running your own model, like Llama 3 or a Mistral/Mixtral variant... it's free forever then, and no one is combing through your data either. Llama 3 8B and Mistral 7B will take up around 4-6 GB of RAM depending on the quant, and Llama 3 70B will take up around 28-40 GB depending on the quant. Phi-3-mini will only take up 3-4 GB but is by far the dumbest. Llama 3 8B > Mistral 7B, unless you wanna get sexual with your AI, in which case Llama 3 is much more hesitant than Mistral, according to the LocalLlama subreddit. For coding, Wizard Llama 3 is the best.
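
    As a rough sanity check on those numbers, the memory for the weights of a GGUF quant is roughly parameters times bits-per-weight divided by 8, with the KV cache and runtime adding a bit more on top. A quick back-of-the-envelope sketch (the parameter counts and bits-per-weight values below are illustrative assumptions for typical quants):

    ```python
    # Back-of-the-envelope sketch of the RAM figures quoted above.
    # Weights only: parameters (billions) * bits per weight / 8 gives GB;
    # the KV cache and runtime overhead come on top of this.
    def approx_weight_gb(n_params_billion: float, bits_per_weight: float) -> float:
        return round(n_params_billion * bits_per_weight / 8, 1)

    # Parameter counts and bits-per-weight here are illustrative assumptions.
    for name, params, bits in [
        ("Llama 3 8B  @ ~Q4", 8.0, 4.5),
        ("Mistral 7B  @ ~Q5", 7.2, 5.5),
        ("Llama 3 70B @ ~Q4", 70.0, 4.5),
        ("Phi-3-mini  @ ~Q8", 3.8, 8.0),
    ]:
        print(f"{name}: ~{approx_weight_gb(params, bits)} GB for the weights")
    ```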

  • Also, OpenAI lost its co-founder and chief scientist, Ilya Sutskever (https://www.theverge.com/2024/5/14/24156920/openai-chief-scientist-ilya-sutskever-leaves). Maybe he is going to leave Altman for the other co-founder, Elon Musk, or do something else entirely.

  • havoc (OG, Content Writer)

    Things are about to get hella spicy. Both OpenAI and Google pushing for free & fast is great news for us.

    Notably though... that's mostly free for on-platform use, not the API.

    I do fear for the "open"/free/self-host space though. The big cloud/OpenAI stuff is becoming so good/cheap/free that it's just going to dominate and suck the air out of the space. Even with a high-end 24 GB card these days... you're not getting close to even the old cloud models like GPT-3.5.

    @BruhGamer12 said:
    Also, OpenAI lost its co-founder and chief scientist, Ilya Sutskever (https://www.theverge.com/2024/5/14/24156920/openai-chief-scientist-ilya-sutskever-leaves). Maybe he is going to leave Altman for the other co-founder, Elon Musk, or do something else entirely.

    Storm in a teacup imo. Ilya has been surrounded by drama for a while... I doubt he's behind any actual technical pushes of late.

  • @havoc said: I do fear for the "open"/free/self-host space though. The big cloud/OpenAI stuff is becoming so good/cheap/free that it's just going to dominate and suck the air out of the space. Even with a high-end 24 GB card these days... you're not getting close to even the old cloud models like GPT-3.5.

    Ehh, with 2x Arc A770 16 GB ($32 GB total) for about $500 you can run Llama 3 70B or Mixtral 8x7B for sure, and Llama 3 70B is definitely better than GPT-3.5. Plus they won't constantly refuse to do stuff for you.

  • AuroraZero (Retired)

    Someone ask it:

    There are three on/off switches on the ground floor of a building. Only one operates a single lightbulb on the third floor. The other two switches are not connected to anything. Put the switches in any on/off order you like. Then go to the third floor to check the bulb. Without leaving the third floor, can you figure out which switch is genuine? You get only one try.


  • @AuroraZero said:
    Someone ask it:

    There are three on/off switches on the ground floor of a building. Only one operates a single lightbulb on the third floor. The other two switches are not connected to anything. Put the switches in any on/off order you like. Then go to the third floor to check the bulb. Without leaving the third floor, can you figure out which switch is genuine? You get only one try.

    https://www.scientificamerican.com/article/answers-to-puzzles-posed-in-let-the-games-continue/

  • @AuroraZero said: There are three on/off switches on the ground floor of a building. Only one operates a single lightbulb on the third floor. The other two switches are not connected to anything. Put the switches in any on/off order you like. Then go to the third floor to check the bulb. Without leaving the third floor, can you figure out which switch is genuine? You get only one try.

    GPT4o:

    Yes, it is possible to figure out which switch controls the lightbulb with a single trip to the third floor. Here’s the step-by-step logic to determine the correct switch:
    
        Label the Switches: For clarity, let's label the three switches as A, B, and C.
    
        Initial State:
            Turn switch A to the "on" position.
            Leave switches B and C in the "off" position.
    
        Wait for a Period of Time: Leave switch A on for about 10-15 minutes. This allows enough time for the lightbulb (if controlled by switch A) to heat up.
    
        Change the Switch Positions:
            After the 10-15 minute period, turn switch A off.
            Turn switch B on.
            Leave switch C off.
    
        Go to the Third Floor: Now, go to the third floor and check the lightbulb.
    
        Analyze the Lightbulb:
            If the lightbulb is on: This means switch B is the genuine switch controlling the lightbulb.
            If the lightbulb is off:
                Touch the lightbulb carefully to check if it is warm.
                If the lightbulb is warm: This means switch A is the genuine switch. The bulb was on and heated up when switch A was left on for 10-15 minutes and then turned off.
                If the lightbulb is cold: This means switch C is the genuine switch, as neither switch A nor B turned the lightbulb on.
    
    By following these steps, you can determine which switch controls the lightbulb with only one trip to the third floor.
    

    GPT 4:

    Yes, you can determine which switch controls the lightbulb on the third floor using the following method. This approach involves using the physical properties of the lightbulb—specifically, its ability to retain heat after being turned on.
    Step-by-Step Process
    
        Label the Switches: Begin by labeling the three switches as A, B, and C. This will help you keep track of which switch you manipulate.
    
        Turn On and Off the Switches Strategically:
            Turn on switch A and leave it on for a few minutes. This allows the lightbulb (if A is the correct switch) to heat up.
            After a few minutes, turn off switch A and immediately turn on switch B. Leave switch C off throughout the entire process.
    
        Visit the Third Floor:
            Go to the third floor where the lightbulb is located.
    
        Observe and Feel the Bulb:
            If the bulb is on when you reach the third floor, then switch B is the switch that controls the lightbulb.
            If the bulb is off but still warm to the touch, then switch A is the one that controls the bulb, as it was on long enough to heat the bulb but was turned off before you arrived.
            If the bulb is off and cold, then switch C is the switch that controls the lightbulb, since it was never turned on.
    
    Conclusion
    
    This method allows you to determine which switch controls the lightbulb with a single trip to the third floor, without the need for any additional equipment or complex strategies. The key is to utilize the residual heat of the bulb effectively, providing a clear indication of which switch was recently active. This logical and systematic approach ensures an accurate solution to the problem.
    

    Claude 3 Opus:

    Yes, it is possible to determine which switch operates the lightbulb on the third floor with just one trip upstairs. Here's how you can solve this puzzle:
    
    1. Turn on the first switch and leave it on for a few minutes (around 5 minutes should be sufficient).
    
    2. After a few minutes, turn off the first switch and immediately turn on the second switch.
    
    3. Go directly to the third floor to check the lightbulb.
    
    4. If the lightbulb is on, you know that the second switch is the one connected to the bulb.
    
    5. If the lightbulb is off and warm to the touch, the first switch is the one connected to the bulb. The bulb is warm because it was turned on for a few minutes before you went upstairs.
    
    6. If the lightbulb is off and cold to the touch, the third switch must be the one connected to the bulb, as it was never turned on during this process.
    
    By using this method, you can determine which switch operates the lightbulb on the third floor with just one trip upstairs, regardless of the initial positions of the switches.
    
  • @AuroraZero said:
    Someone ask it:

    There are three on/off switches on the ground floor of a building. Only one operates a single lightbulb on the third floor. The other two switches are not connected to anything. Put the switches in any on/off order you like. Then go to the third floor to check the bulb. Without leaving the third floor, can you figure out which switch is genuine? You get only one try.

    Solution by Gemini by Google
    https://g.co/gemini/share/872c21566e41

  • Not_Oles (Hosting Provider, Content Writer)

    @BruhGamer12 said:
    If you guys have fancy computers or servers, maybe you should consider running your own model, like Llama 3 or a Mistral/Mixtral variant... it's free forever then, and no one is combing through your data either. Llama 3 8B and Mistral 7B will take up around 4-6 GB of RAM depending on the quant, and Llama 3 70B will take up around 28-40 GB depending on the quant. Phi-3-mini will only take up 3-4 GB but is by far the dumbest. Llama 3 8B > Mistral 7B, unless you wanna get sexual with your AI, in which case Llama 3 is much more hesitant than Mistral, according to the LocalLlama subreddit. For coding, Wizard Llama 3 is the best.

    May I please ask which, if any, of the models you mention you have self-hosted? Also, why do you think Wizard Llama 3 is the best for coding? FWIW, Google Gemini Advanced told me it could emulate Wizard Llama and give me both its own replies as well as replies in the style of Wizard Llama. I wonder how good Gemini Advanced is at imitating Wizard Llama...

    I have to go look up these models and try to figure out whether I want to try to run one of them. Thanks for the tips!


  • edited May 16

    @Not_Oles said:

    @BruhGamer12 said:
    If you guys have fancy computers or servers, maybe you should consider running your own model, like Llama 3 or a Mistral/Mixtral variant... it's free forever then, and no one is combing through your data either. Llama 3 8B and Mistral 7B will take up around 4-6 GB of RAM depending on the quant, and Llama 3 70B will take up around 28-40 GB depending on the quant. Phi-3-mini will only take up 3-4 GB but is by far the dumbest. Llama 3 8B > Mistral 7B, unless you wanna get sexual with your AI, in which case Llama 3 is much more hesitant than Mistral, according to the LocalLlama subreddit. For coding, Wizard Llama 3 is the best.

    May I please ask which, if any, of the models you mention you have self-hosted? Also, why do you think Wizard Llama 3 is the best for coding? FWIW, Google Gemini Advanced told me it could emulate Wizard Llama and give me both its own replies as well as replies in the style of Wizard Llama. I wonder how good Gemini Advanced is at imitating Wizard Llama...

    I have to go look up these models and try to figure out whether I want to try to run one of them. Thanks for the tips!

    I have self-hosted Mistral and Llama 2. Wizard Llama 3 was fine-tuned by Microsoft, and its scores show it's the best one you can run locally. Microsoft took it down for unknown reasons, but the license was Apache 2.0, so you can legally download and use the copies people made freely. Also, an LLM emulating another LLM doesn't really make sense; sure, it can emulate its style, but that's it, not its knowledge. Self-hosting them is pretty easy too. llama.cpp is what basically everything uses under the hood; there are other programs for Macs or 3000/4000-series Nvidia cards that are slightly better for those respective devices, but llama.cpp works on every GPU (Intel, AMD, Nvidia), every CPU (ARM or x86), and every OS. For Windows, KoboldCpp is the easiest, and for Linux, text-generation-webui is the easiest that uses llama.cpp; it also works on Windows, but it's a bit more complex than KoboldCpp, though not by much.
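
    If it helps, here is a minimal sketch of local generation through llama-cpp-python (the Python bindings for llama.cpp). The GGUF filename and the thread/context settings below are assumptions; point it at whatever quant you actually downloaded.

    ```python
    # Minimal sketch: local inference with llama-cpp-python, the Python
    # bindings for llama.cpp. Install with `pip install llama-cpp-python`.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # hypothetical filename
        n_ctx=4096,    # context window
        n_threads=8,   # CPU threads; tune for your machine
    )

    out = llm(
        "Q: Which switch controls the bulb on the third floor? A:",
        max_tokens=256,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])
    ```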

  • Not_Oles (Hosting Provider, Content Writer)

    @BruhGamer12

    Again, thanks for the tips. I looked at https://github.com/ggerganov/llama.cpp and at https://ggerganov.com/. The latter reminded me of bellard.org. :) I will look around a little more and then try something. :)



  • AuroraZero (Retired)

    @Not_Oles said:
    @BruhGamer12

    Again, thanks for the tips. I looked at https://github.com/ggerganov/llama.cpp and at https://ggerganov.com/. The latter reminded me of bellard.org. :) I will look around a little more and then try something. :)

    Let me know how it goes man. I am seeking one for an idea I have.



  • Not_Oles (Hosting Provider, Content Writer)

    @AuroraZero said: Let me know how it goes man. I am seeking one for an idea I have.

    Will do, but might be a while, so please do not wait for me. :)


  • vyas (OG)
    edited May 17

    @dev_vps said:

    @Not_Oles said:

    @dev_vps said: I was able to generate complex database design.

    If you have time and inclination, could you please post a hint containing some specifics about what you generated?
    Thanks!

    Prompt
    Act as a database architect. Design ER details for IPL games (cricket) with respect to match (game), teams, player etc. SQL driven and can be queried on player or game basis. Can be supported by MS SQL Server, NoSQL, and Snowflake SQL

    Thanks,
    Your post made me realize I know zilch about the current IPL season. The Mrs is gung-ho about tomorrow's game (the 18th). She even fished out tickets from somewhere.

    Maybe I will use Gemini/Llama/GPT… to know enough to appear knowledgeable. One potential solution to avoiding a certain domestic klesh.

    I knew these LLMs were designed for a reason.


  • @BruhGamer12 said:

    @AuroraZero said: There are three on/off switches on the ground floor of a building. Only one operates a single lightbulb on the third floor. The other two switches are not connected to anything. Put the switches in any on/off order you like. Then go to the third floor to check the bulb. Without leaving the third floor, can you figure out which switch is genuine? You get only one try.

    GPT4o:

    Yes, it is possible to figure out which switch controls the lightbulb with a single trip to the third floor. Here’s the step-by-step logic to determine the correct switch:
    
        Label the Switches: For clarity, let's label the three switches as A, B, and C.
    
        Initial State:
            Turn switch A to the "on" position.
            Leave switches B and C in the "off" position.
    
        Wait for a Period of Time: Leave switch A on for about 10-15 minutes. This allows enough time for the lightbulb (if controlled by switch A) to heat up.
    
        Change the Switch Positions:
            After the 10-15 minute period, turn switch A off.
            Turn switch B on.
            Leave switch C off.
    
        Go to the Third Floor: Now, go to the third floor and check the lightbulb.
    
        Analyze the Lightbulb:
            If the lightbulb is on: This means switch B is the genuine switch controlling the lightbulb.
            If the lightbulb is off:
                Touch the lightbulb carefully to check if it is warm.
                If the lightbulb is warm: This means switch A is the genuine switch. The bulb was on and heated up when switch A was left on for 10-15 minutes and then turned off.
                If the lightbulb is cold: This means switch C is the genuine switch, as neither switch A nor B turned the lightbulb on.
    
    By following these steps, you can determine which switch controls the lightbulb with only one trip to the third floor.
    

    GPT 4:

    Yes, you can determine which switch controls the lightbulb on the third floor using the following method. This approach involves using the physical properties of the lightbulb—specifically, its ability to retain heat after being turned on.
    Step-by-Step Process
    
        Label the Switches: Begin by labeling the three switches as A, B, and C. This will help you keep track of which switch you manipulate.
    
        Turn On and Off the Switches Strategically:
            Turn on switch A and leave it on for a few minutes. This allows the lightbulb (if A is the correct switch) to heat up.
            After a few minutes, turn off switch A and immediately turn on switch B. Leave switch C off throughout the entire process.
    
        Visit the Third Floor:
            Go to the third floor where the lightbulb is located.
    
        Observe and Feel the Bulb:
            If the bulb is on when you reach the third floor, then switch B is the switch that controls the lightbulb.
            If the bulb is off but still warm to the touch, then switch A is the one that controls the bulb, as it was on long enough to heat the bulb but was turned off before you arrived.
            If the bulb is off and cold, then switch C is the switch that controls the lightbulb, since it was never turned on.
    
    Conclusion
    
    This method allows you to determine which switch controls the lightbulb with a single trip to the third floor, without the need for any additional equipment or complex strategies. The key is to utilize the residual heat of the bulb effectively, providing a clear indication of which switch was recently active. This logical and systematic approach ensures an accurate solution to the problem.
    

    Claude 3 Opus:

    Yes, it is possible to determine which switch operates the lightbulb on the third floor with just one trip upstairs. Here's how you can solve this puzzle:
    
    1. Turn on the first switch and leave it on for a few minutes (around 5 minutes should be sufficient).
    
    2. After a few minutes, turn off the first switch and immediately turn on the second switch.
    
    3. Go directly to the third floor to check the lightbulb.
    
    4. If the lightbulb is on, you know that the second switch is the one connected to the bulb.
    
    5. If the lightbulb is off and warm to the touch, the first switch is the one connected to the bulb. The bulb is warm because it was turned on for a few minutes before you went upstairs.
    
    6. If the lightbulb is off and cold to the touch, the third switch must be the one connected to the bulb, as it was never turned on during this process.
    
    By using this method, you can determine which switch operates the lightbulb on the third floor with just one trip upstairs, regardless of the initial positions of the switches.
    

    Interestingly, I'd say ChatGPT-4o is the worst of those three, which is unexpected as it is apparently an upgrade from ChatGPT-4.

    I'd also say the Claude 3 Opus answer has better formatting and is more concise than ChatGPT-4's, even though the answers are equivalent. However, that's not unexpected, as I've generally found Claude 3 Opus better from a 'vibe' perspective.

    I also rate Claude 3 Sonnet highly, if a bit more wordy:

    Yes, it is possible to determine which switch operates the lightbulb on the third floor with only one try, without leaving the third floor. Here's the strategy:

    1. On the ground floor, turn on the first switch and leave it on for a few minutes (say, 5 minutes).
    2. Turn off the first switch and turn on the second switch.
    3. Go to the third floor and check the lightbulb.

    There are three possible scenarios:

    a) If the lightbulb is on, then the second switch operates the lightbulb.
    b) If the lightbulb is off, but it's warm to the touch, then the first switch operates the lightbulb. The warmth is because the bulb was on for a few minutes earlier.
    c) If the lightbulb is off and cold, then the third switch operates the lightbulb.

    By observing the state of the lightbulb (on or off) and its temperature (warm or cold), you can determine which of the three switches controls the lightbulb on the third floor without leaving the floor.

    This strategy works because it allows you to distinguish between the three switches by creating a unique scenario for each switch. The key is to turn on one switch, then turn it off and turn on another switch before checking the bulb on the third floor.

    Claude 3 Haiku, however, gets the solution wrong by leaving the third floor:

    Okay, let's think this through step-by-step:

    1. We have three switches on the ground floor, only one of which is connected to a lightbulb on the third floor.

    2. The goal is to figure out which of the three switches is the genuine one, without leaving the third floor.

    3. Here's the strategy I would use:

      a. First, I would turn all three switches to the off position.

      b. Then, I would go up to the third floor and observe the lightbulb. If the bulb is off, I know that the genuine switch is in the off position.

      c. Next, I would go back down to the ground floor and flip one of the switches to the on position.

      d) Then I would go back up to the third floor and observe the lightbulb again. If the bulb is now on, I know that the switch I just flipped is the genuine one.

      e) If the bulb is still off, I would go back down and flip a different switch to the on position, then check the bulb again on the third floor.

      f) By systematically trying each switch, I can determine which one is connected to the lightbulb on the third floor, without ever having to leave that floor.

    So in summary, the key is to start with all switches off, check the bulb, then systematically try each switch one by one, observing the bulb each time, until I identify the genuine switch. This can be done in a single attempt without leaving the third floor.

    What is surprising is that I'd say the Gemini Advanced answer (here) is the best. I'd dismissed Gemini based on the free/standard version's answers being terrible; I will have to give Gemini Advanced a try.

  • edited May 20

    @adly said:

    What is surprising is that I'd say the Gemini Advanced answer (here) is the best. I'd dismissed Gemini based on the free/standard version's answers being terrible; I will have to give Gemini Advanced a try.

    That is the Google Gemini link shared by me … in this thread.

    Gemini helped me with a couple of code examples at work as well.

  • Probably dated; saw it on Twitter.

    (image attachment: KrbOMX.WA-Twitter.md.jpeg)


  • vyas (OG)
    edited May 20

    While on the topic,

    I have tested Claude (Haiku, Sonnet...) and Gemini Pro as well as ChatGPT-4o, and overall I'm quite impressed with Claude and also Command R+. YouChat's single interface and option to change the LLM within a single conversation work rather well for me; I'm posting the link for anyone interested. The student discount is a big plus. Image generation is via DALL·E 3, and the results are impressive.

    I can also recommend Perplexity AI, though I have to keep refreshing the page, and it asks a few follow-up questions for more relevant responses (which can be irritating sometimes, since it involves more steps). But again: Claude 3 and GPT-4o under the same "roof", plus SDXL and Playground AI for image generation.

    Link (aff): YouChat

    Link (aff): PerplexityAI



  • I think GPT-4 (the regular one) is the most advanced model. Claude Opus is okay, but I think it lacks the same intelligence as GPT-4. The Phi-3 models are very small, around 3.7B parameters, run locally extremely fast, are trained on textbooks and GPT-4 output, and are pretty decent.

  • Also prompting radically changes your results.

    My layout right now:

    My Expectations of Assistant

    Defer to the user's wishes if they override these expectations:

    Language and Tone

    • Use EXPERT terminology for the given context
    • AVOID: superfluous prose, self-references, expert advice disclaimers, and apologies

    Content Depth and Breadth

    • Present a holistic understanding of the topic
    • Provide comprehensive and nuanced analysis and guidance
    • For complex queries, demonstrate your reasoning process with step-by-step explanations

    Methodology and Approach

    • Mimic socratic self-questioning and theory of mind as needed
    • Do not elide or truncate code in code samples

    Formatting Output

    • Use markdown, latex, Unicode, lists and indenting, headings, and tables only to enhance organization, readability, and understanding
    • CRITICAL: Embed all HYPERLINKS inline as Google search links {emoji related to terms} short text
    • Especially add HYPERLINKS to entities such as papers, articles, books, organizations, people, legal citations, technical terms, and industry standards using Google Search

    VERBOSITY: comprehensive, with as much length, detail, and nuance as possible.

    1. Start response with:

      | Attribute | Description |
      | --- | --- |
      | Domain > Expert | {the broad academic or study DOMAIN the question falls under} > {within the DOMAIN, the specific EXPERT role most closely associated with the context or nuance of the question} |
      | Goal | {qualitative description of current assistant objective and VERBOSITY} |
      | Assumptions | {assistant assumptions about user question, intent, and context} |
      | Methodology | {any specific methodology assistant will incorporate} |
    2. Return your response, and remember to incorporate:

    • Assistant Rules and Output Format
    • embedded, inline HYPERLINKS as Google search links if needed text to link as needed
    • step-by-step reasoning
    3. Remember
    • You will get a larger tip the better your output solves the problem I ask.
    • You are coming up for a work review soon so you need to make sure your output is top-notch with no mistakes.
    • If you fail to solve the problem you will receive no tip, no promotion, and may be fired.
    • Before starting your response plan and review your response with the question I ask of you.
    • When responding make sure you routinely reflect on the path your response is following to make sure it will solve the problem I ask.
    • At the end do a complete recheck of all steps line by line and word by word to make sure they make logical sense and no mistakes have been made.
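
    For what it's worth, the same layout can be reused outside the chat UI by sending it as the system message of an API call. A minimal sketch with the OpenAI Python SDK (openai>=1.0); the model name is an assumption, and the placeholder string stands in for the full layout above:

    ```python
    # Minimal sketch: reusing the layout above as a system prompt via the
    # OpenAI Python SDK (openai>=1.0). Model name is an assumption; any chat
    # API with a "system" role works the same way.
    from openai import OpenAI

    SYSTEM_PROMPT = """My Expectations of Assistant
    (paste the full layout from this post here)
    """

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Explain LLM weight quantization step by step."},
        ],
    )
    print(resp.choices[0].message.content)
    ```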
  • Not_Oles (Hosting Provider, Content Writer)

    Just for fun, I went and searched Google for "install open source ai on debian" (without the quotes). The first result was from snapcraft.io, so I skipped that. Here is the 11th result:

    https://dwaves.de/2023/12/16/gnu-linux-how-to-install-gpt4all-on-debian-12-gpt4all-j-an-apache-2-licensed-assistant-style-chatbot-a-free-to-use-locally-running-privacy-aware-chatbot-no-gpu-or-internet-required-dialogu/ .

    Google Gemini Advanced sent me to https://blog.google/technology/developers/gemma-open-models/ and to https://www.kaggle.com/models/google/gemma . Then Google sent me to https://www.datacamp.com/blog/what-is-kaggle for information about kaggle.

    Is there an additional link I should check to receive beginner guidance for the clueless™ regarding an AI install on Debian 12, maybe on an i9-13900 at Hetzner (integrated graphics only, no separate graphics card)? Thanks!


  • @Not_Oles said:
    Just for fun, I went and searched Google for "install open source ai on debian" (without the quotes). The first result was from snapcraft.io, so I skipped that. Here is the 11th result:

    https://dwaves.de/2023/12/16/gnu-linux-how-to-install-gpt4all-on-debian-12-gpt4all-j-an-apache-2-licensed-assistant-style-chatbot-a-free-to-use-locally-running-privacy-aware-chatbot-no-gpu-or-internet-required-dialogu/ .

    Google Gemini Advanced sent me to https://blog.google/technology/developers/gemma-open-models/ and to https://www.kaggle.com/models/google/gemma . Then Google sent me to https://www.datacamp.com/blog/what-is-kaggle for information about kaggle.

    Is there an additional link I should check to receive beginner guidance for the clueless™ regarding an AI install on Debian 12, maybe on an i9-13900 at Hetzner (integrated graphics only, no separate graphics card)? Thanks!

    just use this: https://github.com/oobabooga/text-generation-webui
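
    Once the web UI is running, you can also script against it instead of using the browser. A minimal sketch using plain requests, assuming the UI was started with its OpenAI-compatible API enabled; the host, port, and path below are assumptions and may need adjusting for your install:

    ```python
    # Minimal sketch: querying a locally running text-generation-webui
    # instance over HTTP. Assumes its OpenAI-compatible API is enabled;
    # the host, port, and path are assumptions for a default local setup.
    import requests

    resp = requests.post(
        "http://127.0.0.1:5000/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": "Hello from Debian 12!"}],
            "max_tokens": 128,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```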

  • Not_Oles (Hosting Provider, Content Writer)

    oobabooga seems like a cool guy! Thanks for the tip!


  • vyas (OG)
    edited May 20

    Awesome discussions so far. My main use case for AI tools has been content creation: blog posts, images, etc.

    Here is a post created on the tengr.ai image generation tool, using YouChat (GPT-4o) with some human (my) intervention.

    https://app.simplenote.com/p/bxPhY4

    The instructions for the post, as given to YouChat, are linked below (the conversation includes Gemini Pro 1.5):
    Conversation with YouChat

    Here is an example from Perplexity (Sonar Large 32K, the default) used to create the same blog post.
    Conversation with PerplexityAI

    Edit: Below is a trial script more relatable to the LES audience.
    Link: Debian script @ PerplexityAI

    But the most comprehensive use case was the one below.

    (click to read, collapsing for brevity)

    Creating 800-odd blog posts, where each post represented an episode from a podcast.

    This was mid-August last year, when I had a ChatGPT Pro subscription. Back then we used to host podcasts on Spreaker. My challenge was updating metadata on Spreaker (they had moved from text-based metadata to HTML, yes, in 2023). I wanted to avoid duplication, i.e., updating on Spreaker as well as on the blog posts.

    • The AI tool looked up Spreaker's documentation, specifically how to use the API.
    • Next, it created CSV files with the embed code, and also added the new metadata, feature image, and website link (slug) for each post (a rough version of this step is sketched below).
    • Next, it had scripts to write an introduction, conclusion, and key takeaways for each podcast episode.
    • In batches of 100, I was able to create the 800-odd blog posts in a day's time.

    Setup time (R&D) took about 4 days; the manual process had taken about 15 minutes per episode. You can add up the numbers to see which method was more effective. I am sure that with one year of new developments in the AI space, there will be more efficient and effective ways.
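
    For the curious, here is a rough sketch of that CSV step. The endpoint, response fields, and auth handling are assumptions against an example host, not Spreaker's documented API, so treat it as the shape of the workflow rather than working code for any specific provider:

    ```python
    # Rough sketch of the batch CSV step described above: pull episode
    # metadata from a podcast host's API and write one row per future blog
    # post. The endpoint, fields, and response shape are assumptions using
    # an example host, not Spreaker's documented API.
    import csv
    import requests

    SHOW_ID = "1234567"  # hypothetical show id
    API_URL = f"https://api.example-podcast-host.com/v2/shows/{SHOW_ID}/episodes"  # assumed endpoint

    episodes = requests.get(API_URL, timeout=30).json().get("episodes", [])

    with open("episodes.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "embed_code", "image", "slug"])
        writer.writeheader()
        for ep in episodes:
            title = ep.get("title", "")
            writer.writerow({
                "title": title,
                "embed_code": ep.get("embed_url", ""),  # assumed field name
                "image": ep.get("image_url", ""),       # assumed field name
                "slug": title.lower().replace(" ", "-"),
            })
    ```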


