ollama_llm [2024/04/27 02:59] (current) jpummil
Ollama is an open source inference engine for running LLM models. [[https://

To run a local LLM, you need two ingredients:
AHPCC is making this tool available to our users in a limited fashion so as not to impact the broad array of research tasks making their way through the queues. As such, Ollama is available and can be accessed from the login nodes.

To query a model, you can do so via a login node as follows:
<code>
ollama-ahpcc run <model>
</code>
You need to determine which LLM model you wish to use for the query. More information about the various models can be seen here: https://

To see the available models:
<code>
$ ollama-ahpcc list
NAME
all-minilm:latest
...
</code>
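Because the model listing is plain text, it can be filtered with ordinary shell tools. A small sketch follows; since ollama-ahpcc exists only on the cluster, a sample listing stands in for its output here (on a login node you would simply run ollama-ahpcc list | grep llama):

```shell
# Filter a model listing for a family of interest.
# "listing" stands in for the output of: ollama-ahpcc list
listing='NAME
all-minilm:latest
llama3:latest
llava:latest'
echo "$listing" | grep llama     # keeps only lines containing "llama"
```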
For querying with a single command:

<code>
ollama-ahpcc run <model> <your prompt>
</code>
<code>
ollama-ahpcc run llama3 How many states are in the USA?

There are 50 states in the United States of America (USA).
</code>
<code>
ollama-ahpcc run llama3 Please list in JSON format the names, state capitals, and populations of all 50 US states

Here is the list of 50 US states with their names, state capitals, and populations in JSON format:
...
</code>
<code>
ollama-ahpcc run llama3 Please list all 50 US states >> states.txt
</code>
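Since ollama-ahpcc behaves like any other command, queries can also be scripted. A hedged sketch, looping over prompts stored in a file and appending each answer to a results file; the actual model call is left commented out because ollama-ahpcc exists only on the cluster, and the file names are just examples:

```shell
# Batch sketch: one query per line of prompts.txt, answers collected
# in answers.txt with a header line marking each prompt.
printf 'List all 50 US states\nHow many states are in the USA?\n' > prompts.txt
: > answers.txt                      # start with an empty results file
while IFS= read -r prompt; do
    echo "== $prompt ==" >> answers.txt
    # ollama-ahpcc run llama3 "$prompt" >> answers.txt
done < prompts.txt
cat answers.txt
```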
<code>
ollama-ahpcc run llama3 "$(cat nas.f90)"

This is a Fortran code file, specifically a main program and subroutine for solving a partial differential equation...
</code>
Input size for Ollama is somewhat limited (perhaps 1000 lines?), so it may be necessary to pare down the input using other Linux tools. For example, a protein database file with 2200 lines won't fit, but paring the data down to the first 1000 lines does:
<code>
ollama-ahpcc run llama3 "$(head -n 1000 <file>.pdb)"

This appears to be a PDB (Protein Data Bank) file, which contains structural information about a protein or
protein complex. The file is formatted in a specific way and includes information such as:

* Atom coordinates (3D positions) for each atom in the protein
* Atom names and types (e.g. N, CA, C, O)
* Chain identifiers (e.g. A, B) indicating which chain a particular residue belongs to
</code>

Note that analyzing images and other non-text files requires a multi-modal model such as llava or bakllava, and results seem to be somewhat mixed.

As we are all still learning how to efficiently and properly utilize LLMs in our daily lives, you may be better served looking at the examples on the Ollama GitHub site:

https://

https://