AI Command Line Interface With Standard Input, Standard Output, Standard Error, And Pipes?

Not_Oles (Hosting Provider, Content Writer)
edited August 2 in Help

I've been looking for an open source AI model command line interface that works with standard input, standard output, standard error, and pipes. It's okay if the model is running externally (for example Google Gemini), but a locally running model would be interesting too. Any hints for me? Thanks!
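By "works with" I mean the usual plumbing, something like the snippet below, with `tr` standing in for the hypothetical AI command; any stdio-friendly tool should slot into a pipeline the same way:

```shell
# Input arrives on stdin via a pipe; the answer goes to stdout and
# diagnostics to stderr, so each stream can be redirected separately.
printf 'hello pipes\n' | tr 'a-z' 'A-Z' > out.txt 2> err.txt
cat out.txt   # prints HELLO PIPES
```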

I hope everyone gets the servers they want!

Comments

  • havoc (OG, Content Writer)
    edited August 2

    llama.cpp is probably where I'd start. No idea whether it supports pipes etc., though.

    Be sure to install the right CUDA/Vulkan/ROCm/whatever stack you need for your hardware, or else it'll be slow.

    There is a llama guide somewhere in my submission history.
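    If it does suit you, a piped one-shot call would look something like this; the `llama-cli` binary name and the `-m`/`-p`/`-n` flags are from my memory of recent llama.cpp builds, and `./model.gguf` / `prompt.txt` are placeholders, so double-check against your install:

```shell
# Guarded so the snippet is harmless when llama.cpp isn't installed.
if command -v llama-cli >/dev/null 2>&1; then
  # -m selects the GGUF model file, -p passes the prompt text,
  # -n caps the number of generated tokens.
  llama-cli -m ./model.gguf -p "$(cat prompt.txt)" -n 128
else
  echo 'llama-cli not found; build/install llama.cpp first' >&2
fi
```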

    Thanked by (1)Not_Oles
  • https://github.com/simonmysun/ell

    Does this match your description?

    Thanked by (1)Not_Oles

    youtube.com/watch?v=k1BneeJTDcU
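    Going from the README, usage is roughly like this (untested on my end, so take the exact syntax as a sketch):

```shell
# Guarded: the piped call only runs if ell is on PATH.
if command -v ell >/dev/null 2>&1; then
  # Piped input becomes context; the argument is the instruction.
  cat error.log | ell 'What do these errors mean?'
else
  echo 'ell not installed; see github.com/simonmysun/ell' >&2
fi
```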

  • Not_Oles (Hosting Provider, Content Writer)

    @Otus9051 said:
    https://github.com/simonmysun/ell

    this match your description?

    Not sure. Have to check. Thanks for the tip! I didn't know about ell.

    Thanked by (1)Otus9051

    I hope everyone gets the servers they want!

  • @Not_Oles Curious what you ended up doing, if you're open to sharing. I had the same question.

  • Not_Oles (Hosting Provider, Content Writer)
    edited August 18

    @huntercop

    Haha, what I ended up doing is not so much . . . yet. :)

    I looked at the simonmysun/ell repo and really liked what I saw in the README.md. I haven't looked at the code yet, and I haven't installed it yet. The new Chrome window I opened on August 2 still persists. Not that it matters, but it's one of eleven windows my Chromebook has open at the moment.

    Part of the reason I haven't done much is that I had already installed, and have continued to use, eliben/gemini-cli, which works great with Google Gemini even though eliben/gemini-cli doesn't have full support for standard I/O/E and pipes. Eli was kind enough to add a $load <file path> command when I emailed him about needing multi-line input for chats. Now that gemini-cli supports multi-line input via a file, gemini-cli accomplishes what I have really needed so far.

    I still plan to try simonmysun/ell. And, if anybody else can suggest another CLI interface to AI models which also has full support for standard I/O/E and pipes, that would be great. I'm looking forward to full CLI support and also to trying additional models.
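    To spell out what I mean by full support, here is a toy stand-in written as a shell function: prompt as an argument, input on stdin, the answer on stdout, and diagnostics on stderr. The `ai` name and its behavior are made up purely for illustration; a real tool would send the prompt plus stdin to a model instead of echoing them back:

```shell
# Toy "ai" filter sketching the stdio contract described above.
ai() {
  if [ $# -eq 0 ]; then
    # Errors belong on stderr, with a non-zero exit status.
    echo 'ai: missing prompt' >&2
    return 1
  fi
  # A real tool would query a model here; we just echo prompt + input.
  printf '%s: %s\n' "$1" "$(cat)"
}

printf 'hello world\n' | ai summarize   # prints "summarize: hello world"
```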

    Thanks for asking! Please let us know what you end up doing and how it works. :) Thanks again!

    Thanked by (1)DrNutella

    I hope everyone gets the servers they want!

  • @Not_Oles said:
    @huntercop

    Haha, what I ended up doing is not so much . . . yet. :)

    […]

    Thanks for asking! Please let us know what you end up doing and how it works. :) Thanks again!

    Oh, I wish I had the time right now; my time is already taken up by many other eggs in my daily basket.

  • Well, tbh I don't know if it will be useful to you, but Ollama lets you run models locally on your machine through a CLI. Do check it out once. 😀

    Thanked by (1)Not_Oles
  • Not_OlesNot_Oles Hosting ProviderContent Writer

    @cainyxues said:
    Well, tbh I don't know if it will be useful to you, but Ollama lets you run models locally on your machine through a CLI. Do check it out once. 😀

    Thanks @cainyxues!

    Some links for the curious:

    https://ollama.com/

    https://github.com/ollama/ollama

    From https://www.doprax.com/tutorial/a-step-by-step-guide-for-installing-and-running-ollama-and-openwebui-locally-part-1/ :

    In other words, it is actually a command-line application, so you can interact with it in the terminal directly. Open the terminal and type this command:

    ollama

    It outputs a list of these commands:

    Usage:
      ollama [flags]
      ollama [command]
    
    Available Commands:
      serve       Start ollama
      create      Create a model from a Modelfile
      show        Show information for a model
      run         Run a model
      pull        Pull a model from a registry
      push        Push a model to a registry
      list        List models
      cp          Copy a model
      rm          Remove a model
      help        Help about any command
    
    Flags:
      -h, --help      help for ollama
      -v, --version   Show version information
    
    Use "ollama [command] --help" for more information about a command.
    

    FWIW, in my quick check, I didn't see mention of standard input, output, error, and pipes.
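    That said, one quick experiment would be to just pipe a prompt into `ollama run` and see what happens. I haven't tried it yet, so treat the shape of the call as a guess, and the `llama3.2` model tag as only a placeholder:

```shell
# Guarded so the snippet does nothing harmful if ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
  printf 'Why is the sky blue?\n' | ollama run llama3.2
else
  echo 'ollama not found; see https://ollama.com/' >&2
fi
```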

    I hope everyone gets the servers they want!
