Wow, things are heating up with this latest Meta model. I'd been looking to turn my local LLM workflow up a notch, and I finally got it working with usable responses from the llama3:latest model in Ollama. For some reason, llama3:text was giving me some loopy responses.
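For anyone curious what "working" looks like here, this is a minimal sketch of hitting a local Ollama server from Python. It assumes Ollama is running on its default port (11434), the llama3:latest model has already been pulled, and it uses Ollama's /api/generate endpoint with streaming turned off:

```python
# Minimal sketch: query a local Ollama server.
# Assumes Ollama is serving at its default http://localhost:11434
# and that llama3:latest has already been pulled (ollama pull llama3).
import json
import urllib.request

def generate(prompt: str, model: str = "llama3:latest") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # get one complete JSON response, not a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the full completion in "response".
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is a local LLM workflow?"))
```

The same request works from the command line with `ollama run llama3:latest` if you'd rather skip the HTTP layer entirely.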