QnA
Empower your Q&A processes with ScribbleData's Q&A Agent. Dive into detailed documentation for effective question-and-answer capabilities in your applications.
LLMIndexQuerier(name, cred={}, platform=defaults.LLM_PLATFORM, model=defaults.LLM_MODEL, embedding_model=defaults.LLM_EMBEDDING_MODEL, searchapi=defaults.LLM_SEARCH_API, statestore=defaults.LLM_STATE_STORE, memory_size=1000)
Bases: BaseLLMRAGAgent, AgentEnablersMixin
Class to run queries using LLMs. A query can be run against a specified set of documents that act as context to constrain the answers, or against all the stored knowledge of the LLM model.
Initialize the LLM query agent.

name: name of the agent
cred: credentials object
platform: name of the platform backend to use; defaults to OpenAI GPT for now, with Azure also supported; will be extended in the future to support other models
memory_size: how many tokens of memory to use when chatting with the LLM
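As a sketch of the intended call shape, the stand-in class below mirrors the constructor signature above; it is not the real `LLMIndexQuerier` (which needs live platform credentials), and the default values shown for `platform` and `model` are illustrative guesses, not the SDK's actual `defaults`.

```python
# Hypothetical stand-in mirroring the LLMIndexQuerier signature above,
# so the call shape can be shown without live credentials. The default
# strings here are placeholders, not the SDK's real defaults.
class FakeLLMIndexQuerier:
    def __init__(self, name, cred=None, platform="openai",
                 model="some-default-model", memory_size=1000):
        self.name = name
        self.cred = cred or {}
        self.platform = platform
        self.model = model
        self.memory_size = memory_size

# Typical construction: only `name` is required; everything else has defaults.
agent = FakeLLMIndexQuerier(name="docs-qna", cred={"api_key": "..."})
print(agent.platform, agent.memory_size)  # -> openai 1000
```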
Source code in llmsdk/agents/qna.py
extract_add_kg_entities(answer)
Extract all KG entities from the answer, format them as {entity: relation}, and add them to the current set of tracked KG entities.
Source code in llmsdk/agents/qna.py
get_kg_entities()
Return all the KG entities as a dict {entity: (relation, object)}.
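A minimal sketch of the tracked-entity structure these two KG methods describe. The tracker function and the triples below are invented for illustration; only the dict shape {entity: (relation, object)} comes from the docstrings.

```python
# Per the docstrings above, tracked KG entities map an entity to a
# (relation, object) tuple. A hypothetical tracker might accumulate them so:
kg_entities = {}

def add_kg_entity(entity, relation, obj):
    """Record one extracted (entity, relation, object) triple."""
    kg_entities[entity] = (relation, obj)

add_kg_entity("Paris", "capital_of", "France")
add_kg_entity("Seine", "flows_through", "Paris")

# get_kg_entities() returns a dict shaped like this:
print(kg_entities["Paris"])  # -> ('capital_of', 'France')
```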
query(query, mode='internal', policy={})
Run a query on an index using an LLM chain object.

query: query string
mode: 'internal' for querying over the docset, 'external' for a general query, 'suggest' for asking the LLM for alternate ways to pose the question
policy: any extra params needed by the agent
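The mode parameter routes the query to one of the run_query_* methods documented below. The dispatch sketch here is a guess at that routing, with trivial stand-in handlers in place of the real LLM calls:

```python
# Hypothetical dispatch mirroring the documented modes; the handlers
# below are stand-ins for run_query_internal/external/suggest.
def run_query_internal(q):
    return f"internal answer to: {q}"   # query over the indexed docset

def run_query_external(q):
    return f"external answer to: {q}"   # general query against the LLM

def run_query_suggest(q):
    return f"suggestions for: {q}"      # alternate phrasings of the query

HANDLERS = {
    "internal": run_query_internal,
    "external": run_query_external,
    "suggest": run_query_suggest,
}

def query(q, mode="internal", policy=None):
    # As in the signature above, mode defaults to 'internal'.
    if mode not in HANDLERS:
        raise ValueError(f"unknown mode: {mode}")
    return HANDLERS[mode](q)

print(query("what is RAG?", mode="external"))
```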
Source code in llmsdk/agents/qna.py
run_query_external(query)
Run a query using the LLM directly. This is useful when looking for answers that a generic LLM can provide.
Source code in llmsdk/agents/qna.py
run_query_internal(query)
Run a query using the LLM on an internal docset indexed in the index. This is useful when looking for answers using a private source of data.
Source code in llmsdk/agents/qna.py
run_query_kwords(context='')
Run a query using the LLM on an internal docset indexed in the index. This is useful when looking for answers that a generic LLM can provide.
Source code in llmsdk/agents/qna.py
run_query_search(query)
Run a query using the search agent. This is useful when looking for answers using a search engine.
Source code in llmsdk/agents/qna.py
run_query_suggest(query)
Run a query using the LLM to suggest other ways of asking the query, in the context of the chat history.