Code completion - local


In a similar vein to my recent llama post, here are a couple of notes on getting Copilot-style code completion in vscode, running locally.

Assumes an nvidia GPU.

This one isn't going to be line by line, but it should be enough to follow.

Install docker

This one you can hopefully manage on your own; a quick route is sketched below.
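Assuming a Debian/Ubuntu-ish box, Docker's convenience script is the fastest way:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER   # optional: run docker without sudo, log out/in afterwards

Any other supported install method works just as well.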

Nvidia container toolkit

Follow NVIDIA's official install guide for the container toolkit; the gist is below.
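Roughly this on a Debian/Ubuntu box, once you've added NVIDIA's apt repository per the guide (package and command names are the current ones, adjust if the guide has moved on):

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker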

If you get an error about a crio restart, don't worry about it; it's fine.

Confirm containers can see the GPU

Like so:
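docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi

The exact CUDA image tag above is just an example (any reasonably recent base tag should do). If the command prints the same nvidia-smi table you see on the host, containers can see the GPU.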

Refact

Clone the refact repo

Run the docker command from the README (a hedged version is below) and navigate to the GUI. If your GPU is not local, you can tunnel the port via ssh like so:

ssh myserver -L localhost:8008:localhost:8008
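For reference, the README's docker command looked roughly like this at the time of writing (image name and flags are from memory, so treat them as assumptions and prefer whatever the README currently says):

docker run -d --gpus all -p 8008:8008 smallcloud/refact_self_hosting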

This is going to automatically load a small model. It takes a while...longer than you'd think. You can monitor progress by running "docker logs" against the container.
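For example, grab the container ID and follow the logs:

docker ps
docker logs -f <container id>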

vscode

Install the Refact extension in vscode, go to the extension settings, and set the infurl to wherever your docker container is running. Probably this:

http://127.0.0.1:8008

...and you should be good to go.

You can also select other models in the GUI, e.g. codellama and the like.
