# LLM for Beginners
Understand the basics of agents, tools, and prompts, and some learnings along the way
Audience: For those feeling overwhelmed with the big (but great) library…
I'd be lying if I said I have the entire LangChain library covered; in fact, I am far from it. But the buzz surrounding it was enough to shake me out of my writing hiatus and give it a go 🚀.
The initial motivation was to see what it was that LangChain was adding (at a practical level) that set it apart from the chatbot I built last month using the ChatCompletion.create() function from the openai package. While doing so, I realized I needed to understand the building blocks of LangChain first before moving on to the more complex parts.
This is what this article does. Heads-up though: there will be more parts coming, as I am really fascinated by the library and will continue to explore what all can be built with it.
Let's begin by understanding the fundamental building blocks of LangChain, i.e. Chains. If you'd like to follow along, here's the GitHub repo.
What are chains in LangChain?
Chains are what you get by connecting one or more large language models (LLMs) in a logical way. (Chains can be built of entities other than LLMs, but for now let's stick to this definition for simplicity.)
OpenAI is one type of LLM (provider) that you can use, but there are others like Cohere, Bloom, Huggingface, etc.
Note: Pretty much all of these LLM providers will need you to request an API key in order to use them. So make sure you do that before proceeding with the remainder of this blog. For example:
import os
os.environ["OPENAI_API_KEY"] = "..."
P.S. I am going to use OpenAI for this tutorial because I have a key with credits that expire in a month's time, but feel free to replace it with any other LLM. The concepts covered here will be useful regardless.
Chains can be simple (i.e. Generic) or specialized (i.e. Utility).
1. Generic: A single LLM is the simplest chain. It takes an input prompt and the name of the LLM, and then uses the LLM for text generation (i.e. output for the prompt). Here's an example:
Let's build a basic chain: create a prompt and get a prediction
Prompt creation (using PromptTemplate) is a bit fancy in LangChain, but this is probably because there are quite a few different ways prompts can be created depending on the use case (we will cover AIMessagePromptTemplate, HumanMessagePromptTemplate, etc. in the next blog post). Here's a simple one for now:
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(prompt.format(product="podcast player"))
# OUTPUT
# What is a good name for a company that makes podcast player?
Note: If you require multiple input_variables, for instance input_variables=["product", "audience"] for a template such as "What is a good name for a company that makes {product} for {audience}?", you need to do prompt.format(product="podcast player", audience="teens") to get the updated prompt.
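To make that concrete, here is a minimal runnable sketch; the variable names and template below are just for illustration, not from the original repo:

from langchain.prompts import PromptTemplate

multi_prompt = PromptTemplate(
    input_variables=["product", "audience"],
    template="What is a good name for a company that makes {product} for {audience}?",
)
print(multi_prompt.format(product="podcast player", audience="teens"))
# OUTPUT
# What is a good name for a company that makes podcast player for teens?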
Once you have built a prompt, we can call the desired LLM with it. To do so, we create an LLMChain instance (in our case, we use OpenAI's large language model text-davinci-003). To get the prediction (i.e. the AI-generated text), we use the run function with the name of the product.
from langchain.llms import OpenAI
from langchain.chains import LLMChain

llm = OpenAI(
    model_name="text-davinci-003",  # default model
    temperature=0.9)  # temperature dictates how whacky the output should be
llmchain = LLMChain(llm=llm, prompt=prompt)
llmchain.run("podcast player")
# OUTPUT
# PodConneXion
If you have more than one input_variables, then you won't be able to use run. Instead, you'll have to pass all the variables as a dict. For example, llmchain({"product": "podcast player", "audience": "teens"}).
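Reusing the illustrative two-variable prompt from the sketch above, a quick example of this:

multi_chain = LLMChain(llm=llm, prompt=multi_prompt)
# With more than one input variable, pass a dict instead of calling run()
multi_chain({"product": "podcast player", "audience": "teens"})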
Note 1: According to OpenAI, davinci text-generation models are 10x more expensive than their chat counterparts, i.e. gpt-3.5-turbo, so I tried to switch from a text model to a chat model (i.e. from OpenAI to ChatOpenAI) and the results are pretty much the same.
Note 2: You might see some tutorials using OpenAIChat instead of ChatOpenAI. The former is deprecated and will no longer be supported; we are supposed to use ChatOpenAI.
from langchain.chat_models import ChatOpenAI

chatopenai = ChatOpenAI(model_name="gpt-3.5-turbo")
llmchain_chat = LLMChain(llm=chatopenai, prompt=prompt)
llmchain_chat.run("podcast player")
# OUTPUT
# PodcastStream
This concludes our section on simple chains. It is important to note that we rarely use generic chains as standalone chains. More often they are used as building blocks for Utility chains (as we will see next).
2. Utility: These are specialized chains, composed of many LLMs to help solve a specific task. For example, LangChain supports some end-to-end chains (such as AnalyzeDocumentChain for summarization, QnA, etc.) and some specific ones (such as GraphQnAChain for creating, querying, and saving graphs). We will look at one specific chain, PALChain, in this tutorial and dig a bit deeper into it.
PAL stands for Program-Aided Language model. PALChain reads complex math problems (described in natural language) and generates programs (for solving the math problem) as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter.
To confirm this is in fact true, we can inspect the _call() in the base code here. Under the hood, we can see this chain:
P.S. It is a good practice to inspect _call() in base.py for any of the chains in LangChain to see how things are working under the hood.
from langchain.chains import PALChain
palchain = PALChain.from_math_prompt(llm=llm, verbose=True)
palchain.run("If my age is half of my dad's age and he's going to be 60 subsequent yr, what's my present age?")# OUTPUT
# > Coming into new PALChain chain...
# def answer():
# """If my age is half of my dad's age and he's going to be 60 subsequent yr, what's my present age?"""
# dad_age_next_year = 60
# dad_age_now = dad_age_next_year - 1
# my_age_now = dad_age_now / 2
# end result = my_age_now
# return end result
#
# > Completed chain.
# '29.5'
Note 1: verbose can be set to False if you do not wish to see the intermediate steps.
Now some of you may be wondering: what about the prompt? We certainly didn't pass one as we did for the generic llmchain we built. The fact is, it is automatically loaded when using .from_math_prompt(). You can check the default prompt using palchain.prompt.template, or you can directly inspect the prompt file here.
print(palchain.prompt.template)

# OUTPUT
# 'Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\n    money_initial = 23\n    bagels = 5\n    bagel_cost = 3\n    money_spent = bagels * bagel_cost\n    money_left = money_initial - money_spent\n    result = money_left\n    return result\n\n\n\n\n\n
# Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n    """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\n    golf_balls_initial = 58\n    golf_balls_lost_tuesday = 23\n    golf_balls_lost_wednesday = 2\n    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n    result = golf_balls_left\n    return result\n\n\n\n\n\n
# Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n    """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\n    computers_initial = 9\n    computers_per_day = 5\n    num_days = 4  # 4 days between monday and thursday\n    computers_added = computers_per_day * num_days\n    computers_total = computers_initial + computers_added\n    result = computers_total\n    return result\n\n\n\n\n\n
# Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n    """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\n    toys_initial = 5\n    mom_toys = 2\n    dad_toys = 2\n    total_received = mom_toys + dad_toys\n    total_toys = toys_initial + total_received\n    result = total_toys\n    return result\n\n\n\n\n\n
# Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\n    jason_lollipops_initial = 20\n    jason_lollipops_after = 12\n    denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n    result = denny_lollipops\n    return result\n\n\n\n\n\n
# Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""\n    leah_chocolates = 32\n    sister_chocolates = 42\n    total_chocolates = leah_chocolates + sister_chocolates\n    chocolates_eaten = 35\n    chocolates_left = total_chocolates - chocolates_eaten\n    result = chocolates_left\n    return result\n\n\n\n\n\n
# Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""\n    cars_initial = 3\n    cars_arrived = 2\n    total_cars = cars_initial + cars_arrived\n    result = total_cars\n    return result\n\n\n\n\n\n
# Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\n    trees_initial = 15\n    trees_after = 21\n    trees_added = trees_after - trees_initial\n    result = trees_added\n    return result\n\n\n\n\n\n
# Q: {question}\n\n# solution in Python:\n\n\n'
Note: Most of the utility chains will have their prompts pre-defined as part of the library (check them out here). They are, at times, quite detailed (read: lots of tokens), so there is definitely a trade-off between cost and the quality of the response from the LLM.
Are there any Chains that don't need LLMs and prompts?
Even though PALChain requires an LLM (and a corresponding prompt) to parse the user's question written in natural language, there are some chains in LangChain that don't need one. These are mainly transformation chains that preprocess the prompt, such as removing extra spaces, before inputting it into the LLM. You can see another example here.
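As a rough sketch of what such a chain can look like, using LangChain's TransformChain; the whitespace-cleaning function and the variable names here are our own, not from the linked example:

from langchain.chains import TransformChain

def clean_extra_spaces(inputs: dict) -> dict:
    # Collapse runs of whitespace in the incoming text
    text = " ".join(inputs["text"].split())
    return {"clean_text": text}

clean_chain = TransformChain(
    input_variables=["text"],
    output_variables=["clean_text"],
    transform=clean_extra_spaces,
)
clean_chain.run("What  is a   good name for a company?")
# 'What is a good name for a company?'

No LLM and no prompt are involved here; the chain is just a deterministic Python function wrapped in the chain interface so it can be composed with other chains.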
Can we get to the good part and start creating chains?
Of course we can! We have all the basic building blocks we need to start chaining LLMs together logically such that the output from one can be fed as input to the next. To do so, we will use SimpleSequentialChain.
The documentation has some great examples of this; for instance, you can see here how to combine two chains where chain#1 is used to clean the prompt (remove extra whitespace, shorten the prompt, etc.) and chain#2 is used to call an LLM with this clean prompt. Here's another one where chain#1 is used to generate a synopsis for a play and chain#2 is used to write a review based on this synopsis.
While these are excellent examples, I want to focus on something else. If you remember, I mentioned before that chains can be composed of entities other than LLMs. More specifically, I am interested in chaining agents and LLMs together. But first, what are agents?
Using agents for dynamically calling LLMs
It will be much easier to explain what an agent does vs. what it is.
Say we want to know the weather forecast for tomorrow. If we were to use the simple ChatGPT API and give it the prompt Show me the weather for tomorrow in London, it won't know the answer because it doesn't have access to real-time data.
Wouldn't it be useful if we had an arrangement where we could utilize an LLM to understand our query (i.e. prompt) in natural language and then call the weather API on our behalf to fetch the data needed? This is exactly what an agent does (among other things, of course).
An agent has access to an LLM and a suite of tools, for example Google Search, a Python REPL, a math calculator, weather APIs, etc.
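To make this concrete, here is a minimal sketch of how a custom tool could be defined for an agent; the get_weather function and its return value are hypothetical placeholders, not a real weather API:

from langchain.agents import Tool

def get_weather(location: str) -> str:
    # Hypothetical placeholder: a real tool would call a weather API here
    return f"Tomorrow in {location}: 18C, light rain"

weather_tool = Tool(
    name="Weather",
    func=get_weather,
    description="Useful for fetching the weather forecast for a location.",
)

The description matters: the agent's LLM reads it to decide when this tool is the right one to call.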
There are quite a few agents that LangChain supports (see here for the complete list), but quite frankly the most common one I came across in tutorials and YT videos was zero-shot-react-description. This agent uses the ReAct (Reason + Act) framework to pick the most suitable tool (from a list of tools) based on what the input query is.
P.S.: Here's a nice article that goes in-depth into the ReAct framework.
Let's initialize an agent using initialize_agent and pass it the tools and LLM it needs. There's a long list of tools available here that an agent can use to interact with the outside world. For our example, we are using the same math-solving tool as above, called pal-math. This one requires an LLM at the time of initialization, so we pass it the same OpenAI LLM instance as before.
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.agents import load_tools

llm = OpenAI(temperature=0)
tools = load_tools(["pal-math"], llm=llm)
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
Let's test it out on the same example as above:

agent.run("If my age is half of my dad's age and he is going to be 60 next year, what is my current age?")

# OUTPUT
# > Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
# Action: PAL-MATH
# Action Input: What is my dad's current age if he is going to be 60 next year?
# Observation: 59
# Thought: I now know my dad's current age, so I can divide it by two to get my age.
# Action: Divide 59 by 2
# Action Input: 59/2
# Observation: Divide 59 by 2 is not a valid tool, try another one.
# Thought: I can use PAL-MATH to divide 59 by 2.
# Action: PAL-MATH
# Action Input: Divide 59 by 2
# Observation: 29.5
# Thought: I now know the final answer.
# Final Answer: My current age is 29.5 years old.
# > Finished chain.
# 'My current age is 29.5 years old.'
Note 1: At each step, you'll notice that the agent does one of three things: it either has an observation, a thought, or it takes an action. This is mainly due to the ReAct framework and the associated prompt that the agent is using:

print(agent.agent.llm_chain.prompt.template)

# OUTPUT
# Answer the following questions as best you can. You have access to the following tools:
# PAL-MATH: A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.
# Use the following format:
# Question: the input question you must answer
# Thought: you should always think about what to do
# Action: the action to take, should be one of [PAL-MATH]
# Action Input: the input to the action
# Observation: the result of the action
# ... (this Thought/Action/Action Input/Observation can repeat N times)
# Thought: I now know the final answer
# Final Answer: the final answer to the original input question
# Begin!
# Question: {input}
# Thought:{agent_scratchpad}
Note 2: You might be wondering what's the point of having an agent do the same thing that an LLM can do. Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user's input [Source]. In these types of chains, there is an "agent" which has access to a suite of tools.
For instance, here's an example of an agent that can fetch the right documents (from the vectorstores) for RetrievalQAChain depending on whether the question refers to document A or document B.
For fun, I tried making the input question more complex (using Demi Moore's age as a placeholder for Dad's actual age).

agent.run("My age is half of my dad's age. Next year he is going to be the same age as Demi Moore. What is my current age?")

Unfortunately, the answer was slightly off, as the agent was not using the latest age for Demi Moore (since OpenAI models were trained on data until 2020). This can be easily fixed by including another tool: tools = load_tools(["pal-math", "serpapi"], llm=llm). serpapi is useful for answering questions about current events.
Note: It is important to add as many tools as you think may be relevant to the user query. The problem with using a single tool is that the agent keeps trying to use the same tool even if it's not the most relevant for a particular observation/action step, as shown in the sketch below.
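Re-initializing the agent from above with both tools could look like this; note that serpapi expects its own API key (typically set via the SERPAPI_API_KEY environment variable):

tools = load_tools(["pal-math", "serpapi"], llm=llm)
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
agent.run("My age is half of my dad's age. Next year he is going to be the same age as Demi Moore. What is my current age?")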
Here's another example of a tool you can use: podcast-api. You need to get your own API key and plug it into the code below.
instruments = load_tools(["podcast-api"], llm=llm, listen_api_key="...")
agent = initialize_agent(instruments,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True)agent.run("Present me episodes for cash saving ideas.")
# OUTPUT
# > Coming into new AgentExecutor chain...
# I ought to seek for podcasts or episodes associated to cash saving
# Motion: Podcast API
# Motion Enter: Cash saving ideas
# Remark: The API name returned 3 podcasts associated to cash saving ideas: The Cash Nerds, The Rachel Cruze Present, and The Martin Lewis Podcast. These podcasts provide worthwhile cash saving ideas and recommendation to assist individuals take management of their funds and create a life they love.
# Thought: I now have some choices to select from
# Closing Reply: The Cash Nerds, The Rachel Cruze Present, and The Martin Lewis Podcast are nice podcast choices for cash saving ideas.
# > Completed chain.
# 'The Cash Nerds, The Rachel Cruze Present, and The Martin Lewis Podcast are nice podcast choices for cash saving ideas.'
Note 1: There is a known error with using this API where you might see openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested XXX tokens (XX in your prompt; XX for the completion). Please reduce your prompt; or completion length. This happens when the response returned by the API is too large. To work around this, the documentation suggests returning fewer search results, for example by updating the question to "Show me episodes for money saving tips, return only 1 result".
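In other words, the workaround is baked directly into the query itself:

agent.run("Show me episodes for money saving tips, return only 1 result.")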
Note 2: While tinkering around with this tool, I noticed some inconsistencies. The responses aren't always complete the first time around; for instance, here are the input and responses from two consecutive runs:
Input: "Podcasts for getting better at French"
Response 1: "The best podcast for learning French is the one with the best review score."
Response 2: "The best podcast for learning French is 'FrenchPod101'."
Under the hood, the tool first uses an LLMChain for building the API URL based on our input instructions (something along the lines of https://listen-api.listennotes.com/api/v2/search?q=french&type=podcast&page_size=3) and making the API call. Upon receiving the response, it uses another LLMChain that summarizes the response to get the answer to our original question. You can check out the prompts here for both LLMChains, which describe the process in more detail.
I am inclined to guess the inconsistent results seen above stem from the summarization step, because I have separately debugged and tested the API URL (created by LLMChain#1) via Postman and received the right response. To further confirm my doubts, I also stress-tested the summarization chain as a standalone chain with an empty API URL, hoping it would throw an error, but got the response "'Investing' podcasts were found, containing 3 results in total." 🤷‍♀️ I'd be curious to see if others had better luck than me with this tool!
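If you want to reproduce that Postman check from Python, a rough sketch could look like the following. I believe Listen Notes expects the key in an X-ListenAPI-Key header, but treat that as an assumption and verify against their docs:

import requests

url = "https://listen-api.listennotes.com/api/v2/search?q=french&type=podcast&page_size=3"
# Assumption: the API key goes in the X-ListenAPI-Key header
response = requests.get(url, headers={"X-ListenAPI-Key": "..."})
print(response.status_code)
print(response.json().get("count"))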
Use Case 2: Combine chains to create an age-appropriate gift generator
Let's put our knowledge of agents and sequential chaining to good use and create our own sequential chain. We will combine:
- Chain #1: the agent we just created that can solve age problems in math.
- Chain #2: an LLM that takes the age of a person and suggests an appropriate gift for them.
# Chain 1 - solve the math problem, get the age
chain_one = agent

# Chain 2 - suggest an age-appropriate gift
template = """You are a gift recommender. Given a person's age,\n
it is your job to suggest an appropriate gift for them.
Person Age:
{age}
Suggest gift:"""
prompt_template = PromptTemplate(input_variables=["age"], template=template)
chain_two = LLMChain(llm=llm, prompt=prompt_template)
Now that we have both chains ready, we can combine them using SimpleSequentialChain.

from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(
    chains=[chain_one, chain_two],
    verbose=True)
A couple of things to note:
- We need not explicitly pass input_variables and output_variables for SimpleSequentialChain, because the underlying assumption is that the output from chain 1 is passed as input to chain 2.
Finally, we can run it with the same math problem as before:

question = "If my age is half of my dad's age and he is going to be 60 next year, what is my current age?"
overall_chain.run(question)

# OUTPUT
# > Entering new SimpleSequentialChain chain...
# > Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
# Action: PAL-MATH
# Action Input: What is my dad's current age if he is going to be 60 next year?
# Observation: 59
# Thought: I now know my dad's current age, so I can divide it by two to get my age.
# Action: Divide 59 by 2
# Action Input: 59/2
# Observation: Divide 59 by 2 is not a valid tool, try another one.
# Thought: I need to use PAL-MATH to divide 59 by 2.
# Action: PAL-MATH
# Action Input: Divide 59 by 2
# Observation: 29.5
# Thought: I now know the final answer.
# Final Answer: My current age is 29.5 years old.
# > Finished chain.
# My current age is 29.5 years old.
# Given your age, a great gift would be something that you can use and enjoy now like a nice bottle of wine, a luxury watch, a cookbook, or a gift card to a favorite store or restaurant. Or, you could get something that will last for years like a nice piece of jewelry or a quality leather wallet.
# > Finished chain.
# '\nGiven your age, a great gift would be something that you can use and enjoy now like a nice bottle of wine, a luxury watch, a cookbook, or a gift card to a favorite store or restaurant. Or, you could get something that will last for years like a nice piece of jewelry or a quality leather wallet.'
There might be times when you need to pass along some additional context to the second chain, in addition to what it is receiving from the first chain. For instance, I want to set a budget for the gift depending on the age of the person returned by the first chain. We can do so using SimpleMemory.
First, let's update the prompt for chain_two and pass it a second variable called budget within input_variables.
template = """You're a present recommender. Given an individual's age,n
it's your job to recommend an applicable present for them. If age is underneath 10,n
the present ought to value not more than {finances} in any other case it ought to value atleast 10 instances {finances}.Individual Age:
{output}
Counsel present:"""
prompt_template = PromptTemplate(input_variables=["output", "budget"], template=template)
chain_two = LLMChain(llm=llm, immediate=prompt_template)
If you compare the template we had for SimpleSequentialChain with the one above, you'll notice that I have also updated the first input's variable name from age to output. This is a crucial step, failing which an error would be raised at the time of chain validation: Missing required input keys: {age}, only had {input, output, budget}.
This is because the output from the first entity in the chain (i.e. the agent) will be the input for the second entity in the chain (i.e. chain_two), and therefore the variable names must match. Upon inspecting the agent's output keys, we see that the output variable is called output, hence the update.
print(agent.agent.llm_chain.output_keys)

# OUTPUT
# ["output"]
Next, let's update the kind of chain we are making. We can no longer work with SimpleSequentialChain because it only works in cases where there is a single input and a single output. Since chain_two now takes two input_variables, we need to use SequentialChain, which is tailored to handle multiple inputs and outputs.
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

overall_chain = SequentialChain(
    input_variables=["input"],
    memory=SimpleMemory(memories={"budget": "100 GBP"}),
    chains=[agent, chain_two],
    verbose=True)
A couple of things to note:
- Unlike SimpleSequentialChain, passing the input_variables parameter is mandatory for SequentialChain. It is a list containing the names of the input variables that the first entity in the chain (i.e. the agent in our case) expects.
Now some of you may be wondering how to know the exact name used in the input prompt that the agent is going to use. We certainly didn't write the prompt for this agent (as we did for chain_two)! It is actually quite easy to find out by inspecting the prompt template of the llm_chain that the agent is made up of.
print(agent.agent.llm_chain.prompt.template)

# OUTPUT
# Answer the following questions as best you can. You have access to the following tools:
# PAL-MATH: A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.
# Use the following format:
# Question: the input question you must answer
# Thought: you should always think about what to do
# Action: the action to take, should be one of [PAL-MATH]
# Action Input: the input to the action
# Observation: the result of the action
# ... (this Thought/Action/Action Input/Observation can repeat N times)
# Thought: I now know the final answer
# Final Answer: the final answer to the original input question
# Begin!
# Question: {input}
# Thought:{agent_scratchpad}
As you can see towards the end of the prompt, the question asked by the end-user is stored in an input variable named input. If for some reason you had to change this name in the prompt, make sure you also update the input_variables at the time of the creation of SequentialChain.
Finally, you could have found out the same information without going through the whole prompt:
print(agent.agent.llm_chain.prompt.input_variables)

# OUTPUT
# ['input', 'agent_scratchpad']
- SimpleMemory is an easy way to store context or other bits of information that shouldn't ever change between prompts. It requires one parameter at the time of initialization: memories. You can pass elements to it in dict form, for instance SimpleMemory(memories={"budget": "100 GBP"}).
Finally, let's run the new chain with the same prompt as before. You'll notice that the final output has some luxury gift recommendations, such as weekend getaways, in accordance with the higher budget in our updated prompt.
overall_chain.run("If my age is half of my dad's age and he's going to be 60 subsequent yr, what's my present age?")# OUTPUT
#> Coming into new SequentialChain chain...
#> Coming into new AgentExecutor chain...
# I would like to determine my dad's present age after which divide it by two.
#Motion: PAL-MATH
#Motion Enter: What's my dad's present age if he's going to be 60 subsequent yr?
#Remark: 59
#Thought: I now know my dad's present age, so I can divide it by two to get my age.
#Motion: Divide 59 by 2
#Motion Enter: 59/2
#Remark: Divide 59 by 2 just isn't a legitimate software, strive one other one.
#Thought: I can use PAL-MATH to divide 59 by 2.
#Motion: PAL-MATH
#Motion Enter: Divide 59 by 2
#Remark: 29.5
#Thought: I now know the ultimate reply.
#Closing Reply: My present age is 29.5 years outdated.
#> Completed chain.
# For somebody of your age, present could be one thing that's each sensible and significant. Think about one thing like a pleasant watch, a chunk of bijou, a pleasant leather-based bag, or a present card to a favourite retailer or restaurant.nIf you will have a bigger finances, you might take into account one thing like a weekend getaway, a spa bundle, or a particular expertise.'}
#> Completed chain.
For somebody of your age, present could be one thing that's each sensible and significant. Think about one thing like a pleasant watch, a chunk of bijou, a pleasant leather-based bag, or a present card to a favourite retailer or restaurant.nIf you will have a bigger finances, you might take into account one thing like a weekend getaway, a spa bundle, or a particular expertise.'}