StableLM: An open-source large language model by the Stability AI team
Prodramp
Another fully open-source large language model from the Stability AI team, available in 3B and 7B parameter sizes, pre-trained on The Pile dataset and fine-tuned on five conversational datasets: Stanford's Alpaca, Nomic AI's GPT4All, RyokoAI's ShareGPT52K, Databricks' Dolly, and Anthropic's HH.
== Video Timeline ==
(00:00) Content Intro
(00:35) Introducing StableLM
(01:19) StableLM Model Intro
(03:03) Quick Demo at Hugging Face Space
(04:10) 3B and 7B Params Models
(05:27) Model Training Info
(08:30) Model Context Length
(09:20) Coding Walkthrough
(12:09) Conclusion
=== Resources ===
- https://github.com/Stability-AI/StableLM
- https://pile.eleuther.ai/
- https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat
- https://huggingface.co/stabilityai

3B Parameters LLM Model
- https://huggingface.co/stabilityai/stablelm-base-alpha-3b/tree/main
- https://huggingface.co/stabilityai/stablelm-tuned-alpha-3b/tree/main

7B Parameters LLM Model
- https://huggingface.co/stabilityai/stablelm-base-alpha-7b/tree/main
- https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b/tree/main
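A minimal sketch of loading the tuned 3B model with the Hugging Face transformers library, assuming `transformers` and `torch` are installed and enough memory is available; the `<|SYSTEM|>`/`<|USER|>`/`<|ASSISTANT|>` prompt format follows the Stability-AI/StableLM README, and the example question is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tuned (chat) 3B checkpoint from the Hugging Face Hub.
model_name = "stabilityai/stablelm-tuned-alpha-3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

# The tuned models expect the special <|SYSTEM|>, <|USER|>, <|ASSISTANT|> tokens.
system = "<|SYSTEM|>You are StableLM, a helpful and harmless AI assistant.\n"
prompt = system + "<|USER|>What is The Pile dataset?<|ASSISTANT|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same snippet works for the 7B checkpoint by swapping in `stabilityai/stablelm-tuned-alpha-7b`; the base (non-tuned) models do not use the chat prompt format.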
Please visit: https://prodramp.com | @prodramp https://www.linkedin.com/company/prodramp
Content Creator: Avkash Chauhan (@avkashchauhan) https://www.linkedin.com/in/avkashchauhan
Tags: #stablelm #stableai #finetunellm #openai #python #ai #langchain #chromadb ...
https://www.youtube.com/watch?v=6RjV-ztIIkA