Taking a fun course with Lonely Octopus, I’ve been learning how to use pandas to clean data for analysis, and also how to quickly build a proof of concept/MVP using Streamlit.
Installing Streamlit locally on Windows in Git Bash threw an error:
$ pip install streamlit
WARNING: Failed to write executable - trying to use .deleteme logic
ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: 'C:\Python311\Scripts\watchmedo.exe.deleteme'
“Watchmedo”? Sounded like malware. I got scared and shut off my wifi for a sec. Then I calmed down (watchmedo is just the command-line tool that ships with watchdog, the file-watching library Streamlit depends on) and decided to run it in a venv instead. Created the venv:
$ python -m venv myenv
Then activate it (I’m using Git Bash for my shell):
$ source myenv/Scripts/activate
(or source myenv/bin/activate on macOS/Linux)
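To double-check the venv is actually active, you can ask Python where it’s running from (a quick sanity check; it should print a path inside myenv):
$ python -c "import sys; print(sys.prefix)"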
Then try installing Streamlit again and check that it installed properly:
$ pip install streamlit
$ streamlit --version
Streamlit, version 1.36.0
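Of course, streamlit run needs an app to run. Here’s a minimal app.py sketch to smoke-test the install (the filename and contents are just my example, nothing official):

import streamlit as st

# A tiny interactive page: a title, a text box, and an echoed greeting.
st.title("Hello, Streamlit!")
name = st.text_input("What's your name?")
if name:
    st.write(f"Nice to meet you, {name}!")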
Now the moment of truth — run the little app:
$ streamlit run app.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
The app might also need the OpenAI Python package (pip install openai), so don’t forget to install that, too. BUT…
Running LM Studio is something I’m getting a lot more used to now. I’ve been playing with it and AnythingLLM for local document RAG chats.
So you don’t need to call OpenAI’s API — you can point your app at your local LM Studio server!
You have to grab the example code from inside LM Studio under “chat (python)”:
from openai import OpenAI
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
Paste that into your Streamlit app.py file, replacing the existing client variable (the one pointing at OpenAI’s API). Make sure a model is loaded in LM Studio and that the local server is running.
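For context, here’s roughly how the pieces fit together in app.py (a sketch under my own assumptions: the model name and prompt widget are placeholders I chose, using the standard chat completions call that LM Studio’s OpenAI-compatible server accepts):

import streamlit as st
from openai import OpenAI

# Point the OpenAI client at the local LM Studio server instead of api.openai.com.
# LM Studio doesn't validate the key, but the client requires one, so "lm-studio" works.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

st.title("Local LLM Chat")
prompt = st.text_input("Ask the model something:")

if prompt:
    # "local-model" is a placeholder; LM Studio serves whichever model you've loaded.
    response = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": prompt}],
    )
    st.write(response.choices[0].message.content)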
There are tons of settings to consider inside LM Studio. You also need to have enough memory to run the models! LM Studio’s Discord server is a good place to learn more.