da02d52f8f8f-2
Code Understanding: If you want to understand how to use LLMs to query source code from github, you should read this page. Interacting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions. Extraction: Extract structured inf...
/content/https://python.langchain.com/en/latest/index.html
da02d52f8f8f-3
Model Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so. Discord: Join us on our Discord to discuss all things LangChain! YouTube: A collection of the LangChain tutorials and videos. Production Suppo...
/content/https://python.langchain.com/en/latest/index.html
700e7a5a2edc-0
LangChain Ecosystem# Guides for how other companies/products can be used with LangChain Groups# LangChain provides integration with many LLMs and systems: LLM Providers Chat Model Providers Text Embedding Model Providers Document Loader Integrations T...
/content/https://python.langchain.com/en/latest/ecosystem.html
82171e19819a-0
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
/content/https://python.langchain.com/en/latest/search.html
a3550559966f-0
Tracing# By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents. First, you should install tracing and set up your environment properly. You can use either a locally hosted...
/content/https://python.langchain.com/en/latest/tracing.html
a3550559966f-1
We can keep on exploring each of these nested traces in more detail. For example, here is the lowest level trace with the exact inputs/outputs to the LLM. Changing Sessions# To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session yo...
/content/https://python.langchain.com/en/latest/tracing.html
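The tracing setup described above is driven by environment variables. A minimal sketch: LANGCHAIN_SESSION comes from the text above, while treating LANGCHAIN_TRACING as the on/off flag is an assumption about this LangChain version.

```python
import os

# Enable tracing before constructing any chains or agents.
# (LANGCHAIN_TRACING is assumed to be the enabling flag in early
# LangChain releases; check your installed version if it has no effect.)
os.environ["LANGCHAIN_TRACING"] = "true"

# Record traces to a named session instead of "default", via the
# LANGCHAIN_SESSION variable mentioned above.
os.environ["LANGCHAIN_SESSION"] = "my_experiment"

print(os.environ["LANGCHAIN_SESSION"])
```

Both variables must be set in the process before the traced chain runs, since LangChain reads them at call time.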
91d3a5254df7-0
API References# All of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, and APIs in LangChain. Models Prompts Indexes Memory Chains Agents Utilities Experimental Modules
/content/https://python.langchain.com/en/latest/reference.html
fed1ddfa1af0-0
Deployments# So you’ve made a really cool chain - now what? How do you deploy it and make it easily sharable with the world? This section covers several ...
/content/https://python.langchain.com/en/latest/deployments.html
fed1ddfa1af0-1
This repo serves as a template for how to deploy a LangChain app with Beam. It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API. Vercel# A minimal example on how to run LangChain on Vercel using Flask. Digitalocean App Platform# A minimal example on how to deploy Lan...
/content/https://python.langchain.com/en/latest/deployments.html
fed1ddfa1af0-2
Databutton# These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include Chatbot interface with conversational memory...
/content/https://python.langchain.com/en/latest/deployments.html
2cf4f627b73f-0
Model Comparison# Constructing your language model application will likely involve choosing among many different prompts, models, and even chains. When doing so, you will want to compare these options on different inputs in an easy, flexible, and intuitive way...
/content/https://python.langchain.com/en/latest/model_laboratory.html
2cf4f627b73f-1
Flamingos are pink. Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} Pink HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} pink prompt = PromptTemplate(template="What is the capital of {state...
/content/https://python.langchain.com/en/latest/model_laboratory.html
2cf4f627b73f-2
The capital of New York is Albany. HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} st john s from langchain import SelfAskWithSearchChain, SerpAPIWrapper open_ai_llm = OpenAI(temperature=0) search = SerpAPIWrapper() self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_c...
/content/https://python.langchain.com/en/latest/model_laboratory.html
2cf4f627b73f-3
OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} > Entering new chain... What is the hometown of the reigning men's U.S. Open champion? Are follow up questions needed here: Yes. Follow up: Who is the reig...
/content/https://python.langchain.com/en/latest/model_laboratory.html
f4790f7d1be2-0
Glossary Contents Chain of Thought Prompting Action Plan Generation ReAct Prompting Self-ask Prompt Chaining Memetic Proxy Self Consistency Inception MemPrompt Glossary# This is a collection of terminology commonly used when developing LLM applications. It contains references to external papers or sources whe...
/content/https://python.langchain.com/en/latest/glossary.html
f4790f7d1be2-1
PromptChainer Paper Language Model Cascades ICE Primer Book Socratic Models Memetic Proxy# Encouraging the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response. For example, as a conversation between a student and a teacher. Resources:...
/content/https://python.langchain.com/en/latest/glossary.html
53afbe900bf4-0
LangChain Gallery# Lots of people have built some pretty awesome stuff with LangChain. This is a collection of our favorites. If you see any other demos that you think we should highlight, be sure to let us know! Open Source# HowDoI.ai...
/content/https://python.langchain.com/en/latest/gallery.html
53afbe900bf4-1
Google Folder Semantic Search Build a GitHub support bot with GPT3, LangChain, and Python. Talk With Wind Record sounds of anything (birds, wind, fire, train station) and chat with it. ChatGPT LangChain This simple application demonstrates a conversational agent implemented with OpenAI GPT-3.5 and LangChain. When neces...
/content/https://python.langchain.com/en/latest/gallery.html
53afbe900bf4-2
Tool Updates in Agents Agent improvements (6th Jan 2023) Conversational Agent with Tools (Langchain AGI) Langchain AGI (23rd Dec 2022) Proprietary# Daimon A chat-based AI personal assistant with long-term memory about you. Summarize any file with AI Summarize not only long docs, interview audio or video files quickly, ...
/content/https://python.langchain.com/en/latest/gallery.html
e7bebeed6bb8-0
Indexes# Indexes refer to ways to structure documents so that LLMs can best interact with them. LangChain has a number of modules that help you load, structure, store, and retrieve documents. Docstore Text Splitter Document Loaders Vector Stores Retrievers Document Compressors Document Transformers
/content/https://python.langchain.com/en/latest/reference/indexes.html
cc93ad1b54e9-0
Agents# Reference guide for Agents and associated abstractions. Agents Tools Agent Toolkits
/content/https://python.langchain.com/en/latest/reference/agents.html
12141d07c8ac-0
Integrations# Besides the installation of this Python package, you will also need to install packages and set environment variables depending on which chains you want to use. Note: the reason these packages are not included in the dependencies by default is that as we imagine scaling this package,...
/content/https://python.langchain.com/en/latest/reference/integrations.html
12141d07c8ac-1
CerebriumAI: Install requirements with pip install cerebrium Get a Cerebrium api key and either set it as an environment variable (CEREBRIUMAI_API_KEY) or pass it to the LLM constructor as cerebriumai_api_key. PromptLayer: Install requirements with pip install promptlayer (be sure to be on version 0.1.62 or higher) Get...
/content/https://python.langchain.com/en/latest/reference/integrations.html
12141d07c8ac-2
FAISS: Install requirements with pip install faiss for Python 3.7 and pip install faiss-cpu for Python 3.10+. MyScale Install requirements with pip install clickhouse-connect. For documentations, please refer to this document. Manifest: Install requirements with pip install manifest-ml (Note: this is only available in ...
/content/https://python.langchain.com/en/latest/reference/integrations.html
fc5427caa65f-0
Installation# Official Releases# LangChain is available on PyPI, so it is easily installable with: pip install langchain That will install the bare minimum requirements of LangChain. A lot of the value of LangChain comes when integrating it wi...
/content/https://python.langchain.com/en/latest/reference/installation.html
bd58b76e4cf9-0
Models# LangChain provides interfaces and integrations for a number of different types of models. LLMs Chat Models Embeddings
/content/https://python.langchain.com/en/latest/reference/models.html
d46340b4417e-0
Prompts# The reference guides here all relate to objects for working with Prompts. PromptTemplates Example Selector Output Parsers
/content/https://python.langchain.com/en/latest/reference/prompts.html
291fe73ffa4f-0
Memory# pydantic model langchain.memory.ChatMessageHistory[source]# field messages: List[langchain.schema.BaseMessage] = []# add_ai_message(message: str) → None[source]# Add an AI message to the store add_user_message(message: str) → None[source]# Add a user message to the store clear() → None[source]#...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-1
Return history buffer. property buffer: Any# String buffer of memory. pydantic model langchain.memory.ConversationBufferWindowMemory[source]# Buffer for storing conversation memory. field ai_prefix: str = 'AI'# field human_prefix: str = 'Human'# field k: int = 5# load_memory_variables(inputs: Dict[str, Any]) → Dict[str...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-2
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last l...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-3
history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX,...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-4
field entity_store: langchain.memory.entity.BaseEntityStore [Optional]# field entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human kee...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-5
field human_prefix: str = 'Human'# field k: int = 3# field llm: langchain.schema.BaseLanguageModel [Required]# clear() → None[source]# Clear memory contents. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]# Return history buffer. save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → Non...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-6
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last l...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-7
history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX,...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-8
field human_prefix: str = 'Human'# field k: int = 2# field kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]#
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-9
field knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrati...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-10
Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-11
field llm: langchain.schema.BaseLanguageModel [Required]# field summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'># Number of previous utterances to include in the context. clear() → None[source]# Clear memory contents. get_current_entities(input_string: str) → List[str][...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-12
Return history buffer. save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]# Save context from this conversation to buffer. property memory_variables: List[str]# Will always return list of memory variables. :meta private: pydantic model langchain.memory.ConversationSummaryBufferMemory[source]# B...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-13
Save context from this conversation to buffer. pydantic model langchain.memory.ConversationTokenBufferMemory[source]# Buffer for storing conversation memory. field ai_prefix: str = 'AI'# field human_prefix: str = 'Human'# field llm: langchain.schema.BaseLanguageModel [Required]# field max_token_limit: int = 2000# field...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-14
load_messages() → None[source]# Retrieve the messages from Cosmos messages: List[BaseMessage]# prepare_cosmos() → None[source]# Prepare the CosmosDB client. Use this function or the context manager to make sure your database is ready. upsert_messages(new_message: Optional[langchain.schema.BaseMessage] = None) → None[so...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-15
clear() → None[source]# Delete all entities from store. delete(key: str) → None[source]# Delete entity value from store. exists(key: str) → bool[source]# Check if entity exists in store. get(key: str, default: Optional[str] = None) → Optional[str][source]# Get entity value from store. set(key: str, value: Optional[str]...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-16
field memory: langchain.schema.BaseMemory [Required]# clear() → None[source]# Nothing to clear, got a memory like a vault. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]# Load memory variables from memory. save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]# Nothing shou...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-17
Retrieve the messages from Redis class langchain.memory.RedisEntityStore(session_id: str = 'default', url: str = 'redis://localhost:6379/0', key_prefix: str = 'memory_store', ttl: Optional[int] = 86400, recall_ttl: Optional[int] = 259200, *args: Any, **kwargs: Any)[source]# Redis-backed Entity store. Entities get a TTL...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-18
field memories: Dict[str, Any] = {}# clear() → None[source]# Nothing to clear, got a memory like a vault. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]# Return key-value pairs given the text input to the chain. If None, return all memories save_context(inputs: Dict[str, Any], outputs: Dict[str,...
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
291fe73ffa4f-19
Return history buffer. save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]# Save context from this conversation to buffer. property memory_variables: List[str]# The list of keys emitted from the load_memory_variables method.
/content/https://python.langchain.com/en/latest/reference/modules/memory.html
d1da97e27d8e-0
SearxNG Search# Utility for using SearxNG meta search API. SearxNG is a privacy-friendly free metasearch engine that aggregates results from multiple search engines and databases and supports the OpenSearch specification. More detai...
/content/https://python.langchain.com/en/latest/reference/modules/searx_search.html
d1da97e27d8e-1
In the following example we are using the engines and the language parameters: # assuming the searx host is set as above or exported as an env variable s = SearxSearchWrapper(engines=['google', 'bing'], language='es') Search Tips# Searx offers a special search syntax that can also be used instead of...
/content/https://python.langchain.com/en/latest/reference/modules/searx_search.html
d1da97e27d8e-2
SearxNG Search Syntax for more details. Notes This wrapper is based on the SearxNG fork searxng/searxng which is better maintained than the original Searx project and offers more features. Public SearxNG instances often use a rate limiter for API usage, so you might want to use a self-hosted instance and disable the ra...
/content/https://python.langchain.com/en/latest/reference/modules/searx_search.html
d1da97e27d8e-3
# note: the unsecure parameter is not needed if you pass the url scheme as http searx = SearxSearchWrapper(searx_host="http://localhost:8888", unsecure=True) Validators disable_ssl_warnings » unsecure validate_params » all fields field aiosession: Optional[Any] = None# field cat...
/content/https://python.langchain.com/en/latest/reference/modules/searx_search.html
d1da97e27d8e-4
Asynchronous version of run. results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]# Run query through Searx API and return the results with metadata. Parameters query – The query to sear...
/content/https://python.langchain.com/en/latest/reference/modules/searx_search.html
d1da97e27d8e-5
categories – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. Returns The result of the query. Return type str Raises ValueError – If an error occurred with the query. Example This will make a query to the qwant engine: from langchain.utilities import SearxSearchWrapper sear...
/content/https://python.langchain.com/en/latest/reference/modules/searx_search.html
09c7c6e2de63-0
LLMs# Wrappers on top of large language model APIs. pydantic model langchain.llms.AI21[source]# Wrapper around AI21 large language models. To use, you should have the environment variable AI21_API_KEY set with your API key. Example from langchain.llms import AI21 ai21 = AI21(model="j2-jumbo-instruct") V...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-1
field maxTokens: int = 256# The maximum number of tokens to generate in the completion. field minTokens: int = 0# The minimum number of tokens to generate in the completion. field model: str = 'j2-jumbo-instruct'# Model name to use. field numResults: int = 1# How many completions to generate for each prompt. field pres...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-2
Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Confi...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-3
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_f...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-4
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.AlephAlpha[source...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-5
field contextual_control_threshold: Optional[float] = None# If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-None value, control parameters are also applied to similar tokens. field control_log_additive: Optional[bool] = True# True: ap...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-6
List of strings that may be generated without penalty, regardless of other penalty settings field penalty_exceptions_include_stop_sequences: Optional[bool] = None# Should stop_sequences be included in penalty_exceptions. field presence_penalty: float = 0.0# Penalizes repeated tokens. field raw_completion: bool = False#...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-7
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-8
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. gen...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-9
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-10
To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example Validators raise_warning » all fields set_callback_manager » callback_manager set_verbose » verbose validate_environment » all...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-11
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Crea...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-12
Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt va...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-13
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") stream(prompt: str, stop:...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-14
Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import AzureOpenAI openai = AzureOpenAI(model_name="text-davinci-003") Validators build_extra » all fields set_callback_manager » callback_manager set_verbose » ...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-15
field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'text-davinci-003'# Model name to use. field n: int = 1# How many completions to generate for each prompt. field presence_penalty: float = 0# Penalizes repeated tokens. field...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-16
Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Confi...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-17
Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt va...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-18
Get the sub prompts for llm call. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: b...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-19
Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]# Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to sav...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-20
Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra » all fields set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field model_key: str = ''# model endpoint to use field model_kw...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-21
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fie...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-22
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-23
To use, you should have the cerebrium python package installed, and the environment variable CEREBRIUMAI_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra » all fields set_callback_manager » ...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-24
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny...
/content/https://python.langchain.com/en/latest/reference/modules/llms.html
09c7c6e2de63-25
Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_...
pydantic model langchain.llms.Cohere[source]# Wrapper around Cohere large language models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example from langchain.llms import Cohere cohere ...
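The "environment variable or named parameter" credential pattern that Cohere (and the other providers on this page) describe can be sketched in plain Python. The `resolve_api_key` helper and the placeholder key value are illustrative, not langchain internals.

```python
# Sketch of the common credential-resolution pattern: prefer an explicit
# constructor argument, fall back to an environment variable.
import os
from typing import Optional

def resolve_api_key(key: Optional[str], env_var: str) -> str:
    value = key or os.environ.get(env_var)
    if not value:
        raise ValueError(
            f"Did not find API key: pass it as a named parameter "
            f"or set the {env_var} environment variable."
        )
    return value

os.environ["COHERE_API_KEY"] = "demo-key"  # placeholder value
print(resolve_api_key(None, "COHERE_API_KEY"))  # demo-key
```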
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche...
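The sync/async pair `generate` / `agenerate` described above can be sketched with a fake model. `FakeLLM` and its echo responses are hypothetical stand-ins; a real wrapper would await a provider's async client where this sketch sleeps.

```python
# Minimal sketch of the generate/agenerate interface shape.
import asyncio
from typing import List, Optional

class FakeLLM:
    def generate(self, prompts: List[str], stop: Optional[List[str]] = None):
        # stands in for a blocking provider request per prompt
        return [f"echo: {p}" for p in prompts]

    async def agenerate(self, prompts: List[str], stop: Optional[List[str]] = None):
        # real wrappers await a provider's async client here
        await asyncio.sleep(0)
        return self.generate(prompts, stop)

llm = FakeLLM()
print(llm.generate(["hi"]))                       # ['echo: hi']
print(asyncio.run(llm.agenerate(["hi", "bye"])))  # ['echo: hi', 'echo: bye']
```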
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. gen...
Get the number of tokens in the messages. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_...
To use, you should have the requests python package installed, and the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Only supports text-generation and text2text-generation for now. Example from langchain.llms import DeepInfra di = DeepInfra(model_i...
Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny...
Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the messages. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_...
pydantic model langchain.llms.ForefrontAI[source]# Wrapper around ForefrontAI large language models. To use, you should have the environment variable FOREFRONTAI_API_KEY set with your API key. Example from langchain.llms import ForefrontAI forefrontai = ForefrontAI(endpoint_url="") Validators set_callback_manager » cal...
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Crea...
Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt va...
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: llm.save(file_path="path/llm.yaml") classmethod update_forwar...
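The `save()` method above serializes the LLM's parameters to disk, choosing the format from the file extension (`.json` or `.yaml`). The stdlib-only sketch below writes JSON to avoid a PyYAML dependency; `save_llm` and its parameter dict are illustrative, not the real implementation.

```python
# Sketch of save(): write the LLM's parameters to a file, creating parent
# directories as needed. Real langchain also supports YAML via PyYAML.
import json
import pathlib
import tempfile
from typing import Union

def save_llm(params: dict, file_path: Union[pathlib.Path, str]) -> None:
    path = pathlib.Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("w") as f:
        json.dump(params, f, indent=2)

with tempfile.TemporaryDirectory() as d:
    target = pathlib.Path(d) / "llm.json"
    save_llm({"model_name": "test", "temperature": 0.7}, target)
    print(json.loads(target.read_text())["model_name"])  # test
```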
Return logits for all tokens, not just the last token. field model: str [Required]# Path to the pre-trained GPT4All model file. field n_batch: int = 1# Batch size for prompt processing. field n_ctx: int = 512# Token context window. field n_parts: int = -1# Number of parts to split the model into. If -1, the number of p...
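The GPT4All fields listed above can be summarized as a dataclass with the documented defaults (`n_batch=1`, `n_ctx=512`, `n_parts=-1`). This mirrors the reference entries for illustration; it is not the real langchain class, and the model path is a placeholder.

```python
# Sketch of the GPT4All configuration fields from the reference above.
from dataclasses import dataclass

@dataclass
class GPT4AllConfig:
    model: str              # path to the pre-trained GPT4All model file (required)
    n_batch: int = 1        # batch size for prompt processing
    n_ctx: int = 512        # token context window
    n_parts: int = -1       # parts to split the model into; -1 = determined automatically

cfg = GPT4AllConfig(model="./models/gpt4all.bin")  # placeholder path
print(cfg.n_ctx)  # 512
```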
Only load the vocabulary, no weights. __call__(prompt: str, stop: Optional[List[str]] = None) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_p...
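The "Check Cache and run the LLM" behavior of `__call__` can be sketched as a lookup keyed on the prompt and stop sequence before invoking the model. The in-memory dict here is illustrative; langchain's real cache is a pluggable component.

```python
# Sketch of __call__: consult a cache keyed on (prompt, stop) before
# making the (simulated) provider request.
from typing import List, Optional

class CachedLLM:
    def __init__(self):
        self._cache = {}
        self.calls = 0  # counts simulated provider requests

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        self.calls += 1  # stands in for a real provider request
        return f"completion for: {prompt}"

    def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        key = (prompt, tuple(stop) if stop else None)
        if key not in self._cache:
            self._cache[key] = self._call(prompt, stop)
        return self._cache[key]

llm = CachedLLM()
llm("hello"); llm("hello")  # second call is served from the cache
print(llm.calls)  # 1
```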
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep co...
Get the number of tokens in the messages. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_...
To use, you should have the openai python package installed, and the environment variable GOOSEAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example Validators build_extra » all fields set_callback_man...
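The `build_extra` validator mentioned above implements the "any valid parameter can be passed in, even if not explicitly saved on this class" behavior: constructor kwargs that are not declared fields get collected into `model_kwargs` and forwarded to the provider call. The field names below are illustrative, not GooseAI's actual field set.

```python
# Sketch of the build_extra pattern: gather undeclared kwargs into
# model_kwargs so they reach the underlying provider call.
KNOWN_FIELDS = {"model_name", "temperature"}  # illustrative declared fields

def build_extra(values: dict) -> dict:
    extra = values.setdefault("model_kwargs", {})
    for name in list(values):
        if name not in KNOWN_FIELDS and name != "model_kwargs":
            extra[name] = values.pop(name)
    return values

params = build_extra({"model_name": "gpt-j-6b", "top_p": 0.9, "frequency_penalty": 1.0})
print(params)
# {'model_name': 'gpt-j-6b', 'model_kwargs': {'top_p': 0.9, 'frequency_penalty': 1.0}}
```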
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.sche...
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. gen...
Get the number of tokens in the messages. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_...
Wrapper around HuggingFaceHub Inference Endpoints. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Only supports text-generation and text2text-generation for now. Exam...
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Crea...