
Comparing changes

base repository: langchain-ai/langchainjs
base: 0.1.18
head repository: langchain-ai/langchainjs
compare: 0.1.19
  • 19 commits
  • 34 files changed
  • 7 contributors

Commits on Feb 13, 2024

  1. Release 0.1.18

    jacoblee93 committed Feb 13, 2024
    e401311
  2. Merge pull request #4403 from langchain-ai/release

    langchain[patch]: Release 0.1.18
    jacoblee93 authored Feb 13, 2024
    64c5609
  3. Release 0.0.3

    jacoblee93 committed Feb 13, 2024
    53b71a6
  4. Merge pull request #4404 from langchain-ai/release

    cloudflare[patch]: Release 0.0.3
    jacoblee93 authored Feb 13, 2024
    51dd22a

Commits on Feb 14, 2024

  1. Use onRunCreate (#4405)

    * Use onRunCreate
    
    * make public
    
    * Pass vals in update
    
    * lint
    hinthornw authored Feb 14, 2024
    3d852c1

Commits on Feb 15, 2024

  1. Bump LangSmith versions (#4414)

    jacoblee93 authored Feb 15, 2024
    ce540d1
  2. Make custom tools pass raw config to functions (#4419)

    jacoblee93 authored Feb 15, 2024
    9ab5e5f
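    A minimal sketch of what this change enables, assuming the `DynamicTool` entrypoint from `@langchain/core/tools`; the tool name and logged fields are illustrative:

    ```typescript
    import { DynamicTool } from "@langchain/core/tools";

    // With this change, the invocation-time config (tags, metadata,
    // callbacks) is forwarded to the tool's `func` as an argument.
    const echoTool = new DynamicTool({
      name: "echo",
      description: "Echoes the input back to the caller",
      func: async (input, _runManager, config) => {
        // `config` carries whatever was passed when the tool was invoked
        console.log("invoked with tags:", config?.tags);
        return input;
      },
    });

    // Tags passed here become visible inside `func` via `config`
    await echoTool.invoke("hello", { tags: ["example"] });
    ```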
  3. docs[patch]: Add warnings about SQL Injection for Postgres integrations (#4398)
    
    * bump pg types
    
    * escape identifiers in pgvector store
    
    * escape identifiers in postgres recordmanagers
    
    * double quotes around fk constraint name
    
    * revert changes
    
    * add warnings to docs
    MJDeligan authored Feb 15, 2024
    9a0415b
  4. Remove deprecated call of serializable.js (#4410)

    Use @langchain/core module instead.
    Michael Kesper authored Feb 15, 2024
    62c27c1
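    For consumers, the fix is an import-path swap; a sketch, assuming code previously used the deprecated `langchain/load/serializable` re-export:

    ```typescript
    // Before (deprecated re-export):
    // import { Serializable } from "langchain/load/serializable";

    // After: import directly from @langchain/core
    import { Serializable } from "@langchain/core/load/serializable";

    // Subclasses are unaffected by the path change
    class MyConfig extends Serializable {
      lc_namespace = ["my_app", "config"];
    }
    ```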
  5. core[patch]: Add optional type param to JsonOutputParser (#4420)

    * Add optional type param to JsonOutputParser
    
    * Typing
    
    * Typing
    jacoblee93 authored Feb 15, 2024
    b1d0e2b
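    A sketch of the new optional type parameter, assuming `JsonOutputParser` is imported from `@langchain/core/output_parsers`; the `Joke` shape is illustrative:

    ```typescript
    import { JsonOutputParser } from "@langchain/core/output_parsers";

    // Illustrative shape for the JSON the model is expected to emit
    type Joke = { setup: string; punchline: string };

    // The optional type param types the parsed result
    const parser = new JsonOutputParser<Joke>();

    const joke = await parser.parse(
      '{"setup": "Why did the function return?", "punchline": "To get called back."}'
    );
    console.log(joke.punchline); // typed as string
    ```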

Commits on Feb 16, 2024

  1. Improve developer-facing evaluations API (#4370)

    * Merge `customEvaluators` and `evaluators`
    
    * Add RunnableTraceable
    
    * Fix import error
    
    * Add entrypoints
    
    * Add alias for evaluators
    
    * Add option to specify traceable function
    
    * Add config
    
    * Add criteria and labeled criteria helpers
    
    * Remove RunnableTraceable
    
    * Remove RunnableTraceable from core
    
    * Wrap traceable into a runnable
    
    * Accept any traceable function
    
    * Update internals
    
    * Avoid directly importing langsmith/traceable
    
    * Update to 0.1.1
    
    * Fix build
    
    * Bump deps to 0.1.1
    
    * Update docs
    
    * expose configurable feedbackKey
    
    * Handle single non-record object being passed properly
    
    * Still handle existing edge case
    
    * Fix formatting
    
    * Add missing feedbackKey
    dqbd authored Feb 16, 2024
    156ffa0
  2. docs[minor]: Fix broken link used in quickstart (#4422)

    * fix: replace overview page with user guide page
    
    * Update other links
    
    * Fix lint
    
    ---------
    
    Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
    rogerthatdev and jacoblee93 authored Feb 16, 2024
    63c13f5
  3. langchain[minor]: Couchbase document loader (#4364)

    * added couchbase document loader
    
    * fixed loader to use stringify
    
    * add doc file
    
    * updated tests
    
    * update types as per new requirement
    
    * update comments for typedoc
    
    * fix formatting issues and remove print in tests
    
    * Format
    
    ---------
    
    Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
    lokesh-couchbase and jacoblee93 authored Feb 16, 2024
    f310559
  4. Release 0.1.29

    jacoblee93 committed Feb 16, 2024
    1d9b07d
  5. Merge pull request #4424 from langchain-ai/release

    core[patch]: Release 0.1.29
    jacoblee93 authored Feb 16, 2024
    983818a
  6. Bump core versions (#4425)

    jacoblee93 authored Feb 16, 2024
    f273d16
  7. Release 0.0.29

    jacoblee93 committed Feb 16, 2024
    b0da02c
  8. Merge pull request #4426 from langchain-ai/release

    community[patch]: Release 0.0.29
    jacoblee93 authored Feb 16, 2024
    fc0586e
  9. Bump community (#4427)

    jacoblee93 authored Feb 16, 2024
    9828b2e
Showing with 765 additions and 209 deletions.
  1. +1 −1 docs/core_docs/docs/get_started/quickstart.mdx
  2. +8 −26 docs/core_docs/docs/guides/langsmith_evaluation.mdx
  3. +104 −0 docs/core_docs/docs/integrations/document_loaders/web_loaders/couchbase.mdx
  4. +5 −0 docs/core_docs/docs/integrations/vectorstores/analyticdb.mdx
  5. +5 −0 docs/core_docs/docs/integrations/vectorstores/pgvector.mdx
  6. +15 −11 docs/core_docs/docs/modules/agents/quick_start.mdx
  7. +14 −14 docs/core_docs/docs/use_cases/chatbots/quickstart.mdx
  8. +9 −9 docs/core_docs/docs/use_cases/chatbots/retrieval.mdx
  9. +1 −1 examples/package.json
  10. +3 −3 examples/src/agents/quickstart.ts
  11. +1 −1 examples/src/get_started/quickstart2.ts
  12. +1 −1 examples/src/get_started/quickstart3.ts
  13. +1 −1 examples/src/use_cases/chatbots/quickstart.ts
  14. +1 −1 examples/src/use_cases/chatbots/retrieval.ts
  15. +2 −2 langchain-core/package.json
  16. +8 −3 langchain-core/src/output_parsers/json.ts
  17. +49 −0 langchain-core/src/output_parsers/tests/json.test.ts
  18. +0 −1 langchain-core/src/runnables/base.ts
  19. +2 −2 langchain-core/src/runnables/passthrough.ts
  20. +13 −8 langchain-core/src/tools.ts
  21. +35 −1 langchain-core/src/tracers/tests/langchain_tracer.int.test.ts
  22. +15 −51 langchain-core/src/tracers/tracer_langchain.ts
  23. +4 −0 langchain/.gitignore
  24. +3 −0 langchain/langchain.config.js
  25. +22 −4 langchain/package.json
  26. +36 −0 langchain/src/document_loaders/tests/couchbase.int.test.ts
  27. +88 −0 langchain/src/document_loaders/web/couchbase.ts
  28. +1 −0 langchain/src/load/import_constants.ts
  29. +90 −4 langchain/src/smith/config.ts
  30. +135 −16 langchain/src/smith/runner_utils.ts
  31. +1 −1 libs/langchain-cloudflare/package.json
  32. +3 −3 libs/langchain-community/package.json
  33. +1 −1 libs/langchain-community/src/indexes/base.ts
  34. +88 −43 yarn.lock
2 changes: 1 addition & 1 deletion docs/core_docs/docs/get_started/quickstart.mdx
@@ -236,7 +236,7 @@ Then, use it like this:
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
"https://docs.smith.langchain.com/overview"
"https://docs.smith.langchain.com/user_guide"
);

const docs = await loader.load();
34 changes: 8 additions & 26 deletions docs/core_docs/docs/guides/langsmith_evaluation.mdx
@@ -194,31 +194,15 @@ const notUnsure = async ({
};
};

-const evaluation: RunEvalConfig = {
-  // The 'evaluators' are loaded from LangChain's evaluation
-  // library.
-  evaluators: [
-    {
-      evaluatorType: "labeled_criteria",
-      criteria: "correctness",
-      feedbackKey: "correctness",
-      formatEvaluatorInputs: ({
-        rawInput,
-        rawPrediction,
-        rawReferenceOutput,
-      }) => {
-        return {
-          input: rawInput.input,
-          prediction: rawPrediction.output,
-          reference: rawReferenceOutput.output,
-        };
-      },
-    },
-  ],
+const evaluators: RunEvalConfig["evaluators"] = [
+  // LangChain's built-in evaluators
+  LabeledCriteria("correctness"),
+  Criteria("conciseness"),

  // Custom evaluators can be user-defined RunEvaluator's
  // or a compatible function
-  customEvaluators: [notUnsure],
-};
+  notUnsure,
+];
```

For predefined evaluators passed under the `evaluators` key, the `formatEvaluatorInputs` function you provide formats the raw input and output from the chain and example into the shape the evaluator expects.
@@ -239,9 +223,7 @@ The results will be visible in the LangSmith app.
```typescript
import { runOnDataset } from "langchain/smith";

-await runOnDataset(agentExecutor, datasetName, {
-  evaluationConfig: evaluation,
-});
+await runOnDataset(agentExecutor, datasetName, evaluators);
```

```out
104 changes: 104 additions & 0 deletions docs/core_docs/docs/integrations/document_loaders/web_loaders/couchbase.mdx
@@ -0,0 +1,104 @@
---
hide_table_of_contents: true
sidebar_class_name: node-only
---

# Couchbase

[Couchbase](http://couchbase.com/) is a distributed NoSQL cloud database designed for performance and scalability across cloud, mobile, AI, and edge computing applications.

This guide shows how to load documents from a Couchbase database.

## Installation

```bash npm2yarn
npm install couchbase
```

## Usage

### Querying for Documents from Couchbase

For more details on connecting to a Couchbase cluster, please check the [Node.js SDK documentation](https://docs.couchbase.com/nodejs-sdk/current/howtos/managing-connections.html#connection-strings).

For help with querying for documents using SQL++ (SQL for JSON), please check the [documentation](https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/index.html).

```typescript
import { CouchbaseDocumentLoader } from "langchain/document_loaders/web/couchbase";
import { Cluster } from "couchbase";

const connectionString = "couchbase://localhost"; // valid couchbase connection string
const dbUsername = "Administrator"; // valid database user with read access to the bucket being queried
const dbPassword = "Password"; // password for the database user

// query is a valid SQL++ query
const query = `
SELECT h.* FROM \`travel-sample\`.inventory.hotel h
WHERE h.country = 'United States'
LIMIT 1
`;
```

### Connect to Couchbase Cluster

```typescript
const couchbaseClient = await Cluster.connect(connectionString, {
username: dbUsername,
password: dbPassword,
configProfile: "wanDevelopment",
});
```

### Create the Loader

```typescript
const loader = new CouchbaseDocumentLoader(
couchbaseClient, // The connected couchbase cluster client
query // A valid SQL++ query which will return the required data
);
```

### Load Documents

You can fetch the documents by calling the `load` method of the loader. It returns an array containing all of the documents. If you want to avoid loading everything at once, you can instead call the `lazyLoad` method, which returns an async iterator.

```typescript
// using load method
const docs = await loader.load();
console.log(docs);
```

```typescript
// using lazyLoad
for await (const doc of loader.lazyLoad()) {
console.log(doc);
break; // break based on required condition
}
```

### Specifying Fields with Content and Metadata

The fields that are part of the Document content can be specified using the `pageContentFields` parameter.
The metadata fields for the Document can be specified using the `metadataFields` parameter.

```typescript
const loaderWithSelectedFields = new CouchbaseDocumentLoader(
couchbaseClient,
query,
// pageContentFields
[
"address",
"name",
"city",
"phone",
"country",
"geo",
"description",
"reviews",
],
["id"] // metadataFields
);

const filteredDocs = await loaderWithSelectedFields.load();
console.log(filteredDocs);
```
5 changes: 5 additions & 0 deletions docs/core_docs/docs/integrations/vectorstores/analyticdb.mdx
@@ -44,6 +44,11 @@ npm install @langchain/openai @langchain/community

## Usage

::::danger Security
User-generated data such as usernames should not be used as input for the collection name.
**This may lead to SQL Injection!**
::::

import UsageExample from "@examples/indexes/vector_stores/analyticdb.ts";

<CodeBlock language="typescript">{UsageExample}</CodeBlock>
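
The warning above implies a guard along these lines; a minimal sketch with an illustrative helper (not part of the library), assuming identifiers are restricted to a conservative pattern:

```typescript
// Illustrative helper: reject anything that isn't a plain identifier
// before it can reach SQL as a collection or table name.
const SAFE_IDENTIFIER = /^[A-Za-z_][A-Za-z0-9_]{0,62}$/;

function assertSafeIdentifier(name: string): string {
  if (!SAFE_IDENTIFIER.test(name)) {
    throw new Error(`Refusing unsafe SQL identifier: ${JSON.stringify(name)}`);
  }
  return name;
}

// Usage sketch: validate user-derived values before they reach the store config
const collectionName = assertSafeIdentifier("my_collection");
```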
5 changes: 5 additions & 0 deletions docs/core_docs/docs/integrations/vectorstores/pgvector.mdx
@@ -36,6 +36,11 @@ You can find more information on how to setup `pgvector` in the [official reposi

## Usage

::::danger Security
User-generated data such as usernames should not be used as input for table and column names.
**This may lead to SQL Injection!**
::::

import Example from "@examples/indexes/vector_stores/pgvector_vectorstore/pgvector.ts";

One complete example of using `PGVectorStore` is the following:
26 changes: 15 additions & 11 deletions docs/core_docs/docs/modules/agents/quick_start.mdx
@@ -55,7 +55,7 @@ import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const loader = new CheerioWebBaseLoader(
"https://docs.smith.langchain.com/overview"
"https://docs.smith.langchain.com/user_guide"
);
const rawDocs = await loader.load();

@@ -78,9 +78,9 @@ console.log(retrieverResult[0]);

/*
Document {
pageContent: "dataset uploading.Once we have a dataset, how can we use it to test changes to a prompt or chain? The most basic approach is to run the chain over the data points and visualize the outputs. Despite technological advancements, there still is no substitute for looking at outputs by eye. Currently, running the chain over the data points needs to be done client-side. The LangSmith client makes it easy to pull down a dataset and then run a chain over them, logging the results to a new project associated with the dataset. From there, you can review them. We've made it easy to assign feedback to runs and mark them as correct or incorrect directly in the web app, displaying aggregate statistics for each test project.We also make it easier to evaluate these runs. To that end, we've added a set of evaluators to the open-source LangChain library. These evaluators can be specified when initiating a test run and will evaluate the results once the test run completes. If we’re being honest, most of",
pageContent: "your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing system in the same platform.Production​Closely inspecting key data points, growing benchmarking datasets, annotating traces, and drilling down into important data in trace view are workflows you’ll also want to do once your app hits production. However, especially at the production stage, it’s crucial to get a high-level overview of application performance with respect to latency, cost, and feedback scores. This ensures that it's delivering desirable results at scale.Monitoring and A/B Testing​LangSmith provides monitoring charts that allow you to track key metrics over time. You can expand to",
metadata: {
-    source: 'https://docs.smith.langchain.com/overview',
+    source: 'https://docs.smith.langchain.com/user_guide',
loc: { lines: [Object] }
}
}
@@ -232,7 +232,7 @@ console.log(result2);
{
"pageContent": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 11,
@@ -244,7 +244,7 @@ console.log(result2);
{
"pageContent": "the time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 3,
@@ -256,7 +256,7 @@ console.log(result2);
{
"pageContent": "inputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 4,
@@ -268,7 +268,7 @@ console.log(result2);
{
"pageContent": "feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 11,
@@ -322,13 +322,17 @@ console.log(result2);
input: 'how can langsmith help with testing?',
output: 'LangSmith can help with testing in several ways:\n' +
'\n' +
-'1. Debugging: LangSmith can be used to debug unexpected end results, agent loops, slow chains, and token usage. It helps in pinpointing underperforming data points and tracking performance over time.\n' +
+'1. Initial Test Set: LangSmith allows developers to create datasets of inputs and reference outputs to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces.\n' +
'\n' +
-'2. Monitoring: LangSmith can monitor applications by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. It also allows for associating feedback programmatically with runs, which can be used to track performance over time.\n' +
+"2. Comparison View: When making changes to your applications, LangSmith provides a comparison view to see whether you've regressed with respect to your initial test cases. This is helpful for evaluating changes in prompts, retrieval strategies, or model choices.\n" +
'\n' +
-'3. Exporting Datasets: LangSmith makes it easy to curate datasets, which can be exported for use in other contexts such as OpenAI Evals or fine-tuning with FireworksAI.\n' +
+'3. Monitoring and A/B Testing: LangSmith provides monitoring charts to track key metrics over time and allows for A/B testing changes in prompt, model, or retrieval strategy.\n' +
'\n' +
-'Overall, LangSmith simplifies the process of testing changes, constructing datasets, and extracting insights from logged runs, making it a valuable tool for testing and evaluation.'
+'4. Debugging: LangSmith offers tracing and debugging information at each step of an LLM sequence, making it easier to identify and root-cause issues when things go wrong.\n' +
+'\n' +
+'5. Beta Testing and Production: LangSmith enables the addition of runs as examples to datasets, expanding test coverage on real-world scenarios. It also provides monitoring for application performance with respect to latency, cost, and feedback scores at the production stage.\n' +
+'\n' +
+'Overall, LangSmith provides comprehensive testing and monitoring capabilities for LLM applications.'
}
*/
```