```python
from optimum.neuron import pipeline

clf = pipeline("question-answering")
clf({"context": "This is a sample context", "question": "What is the context here?"})
# {'score': 0.4972594678401947, 'start': 8, 'end': 16, 'answer': 'a sample'}
```
Or with precompiled models as follows:
```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForQuestionAnswering, pipeline
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
# Load the PyTorch checkpoint and convert it to the Neuron format by passing export=True
model = NeuronModelForQuestionAnswering.from_pretrained(
    "deepset/roberta-base-squad2",
    export=True,
)
neuron_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
question = "What's my name?"
context = "My name is Philipp and I live in Nuremberg."
pred = neuron_qa(question=question, context=context)
```
*Relevant PR: 107*
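Since the export happens at load time, it can be worth persisting the compiled model so later runs skip recompilation. A minimal sketch, assuming the usual `save_pretrained`/`from_pretrained` round trip applies to Neuron models (the local path is an arbitrary example):

```python
# Persist the compiled Neuron model and tokenizer to a local directory.
# "roberta-base-squad2-neuron" is an arbitrary example path.
model.save_pretrained("roberta-base-squad2-neuron")
tokenizer.save_pretrained("roberta-base-squad2-neuron")

# Later, reload without export=True: the checkpoint is already in the Neuron format.
from optimum.neuron import NeuronModelForQuestionAnswering

model = NeuronModelForQuestionAnswering.from_pretrained("roberta-base-squad2-neuron")
```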
Cache repo fix
The cache repo system was broken starting from Neuron 2.11.
*This release fixes it; the relevant PR is 119.*
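For context, the cache repo lets compiled artifacts be shared through a Hub repository instead of being recompiled on every machine. A minimal sketch of pointing at a custom cache repo, assuming the `CUSTOM_CACHE_REPO` environment variable is the configuration mechanism and using a hypothetical repo name:

```python
import os

# Hypothetical cache repo name; replace with your own Hub repository.
# Assumption: CUSTOM_CACHE_REPO should be set before optimum.neuron is imported.
os.environ["CUSTOM_CACHE_REPO"] = "my-org/neuron-cache"

from optimum.neuron import NeuronModelForQuestionAnswering

# Compilation results are looked up in the cache repo first; the model is only
# recompiled when no matching artifact is found.
model = NeuronModelForQuestionAnswering.from_pretrained(
    "deepset/roberta-base-squad2",
    export=True,
)
```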