Support nested Attribute values (#376) #596

Closed
wants to merge 2 commits

Conversation

shengxil

Related issue #376

A nested Attribute is useful for representing multi-level values. For example, HTTP headers:

'http': {
    'headers': [
        'x-forwarded-for': 'foo.bar',
        'keep-alive': 'timeout=5'
    ]
}

Currently such cases are usually represented by a period-delimited key chain (e.g. 'http.headers.x-forwarded-for': 'foo.bar'). This does not work if any key contains a period itself. It is also difficult for SDKs to iterate over the Attributes under a certain level.

This change allows nested Attributes by adding a new Attribute value type: a child Attribute. An attribute can also contain a homogeneous array of child Attributes.
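
For illustration, here is a minimal Go sketch of the proposed shape; the type and field names are assumptions for this sketch, not the API wording in the diff:

```go
package attr

// Value holds exactly one kind of payload: a primitive, a homogeneous
// array of primitives, or an array of child Attributes (the new case
// this PR proposes).
type Value struct {
	Str      *string
	Bool     *bool
	Double   *float64
	StrArray []string    // homogeneous primitive array
	Children []Attribute // nested attributes
}

// Attribute pairs a key with a Value; nesting comes from Value.Children.
type Attribute struct {
	Key   string
	Value Value
}

func str(s string) *string { return &s }

// HTTPHeaders builds the 'http.headers' example from above.
func HTTPHeaders() Attribute {
	return Attribute{Key: "http", Value: Value{Children: []Attribute{
		{Key: "headers", Value: Value{Children: []Attribute{
			{Key: "x-forwarded-for", Value: Value{Str: str("foo.bar")}},
			{Key: "keep-alive", Value: Value{Str: str("timeout=5")}},
		}}},
	}}}
}
```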

Why not map?

  1. A map is a somewhat high-level data structure, with different definitions in different languages. In some languages putting a map value replaces the original value, while in others it becomes a sibling of the original value.
  2. Programming-wise, it is hard to keep homogeneity in map values (see the sketch below).
  • In fact, the OT spec has never used a map in any span data type.
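
To illustrate point 2, a tiny Go sketch (an illustration assumed here, not from the PR): a map value typed loosely enough to allow nesting also admits mixed member types, so homogeneity can only be enforced with runtime checks.

```go
package main

import "fmt"

func main() {
	// interface{} is needed to allow nested values, but it also admits
	// mixed member types; the compiler cannot enforce homogeneity.
	m := map[string]interface{}{
		"a": 1,
		"b": "two", // mixed types compile fine
	}
	fmt.Println(m)
}
```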

@Oberon00
Member

HTTP headers can be represented using an attribute http.request.headers.HEADER-NAME per header.

@@ -376,8 +376,9 @@ An `Attribute` is defined by the following properties:
- (Required) The attribute key, which MUST be a non-`null` and non-empty string.
- (Required) The attribute value, which is either:
- A primitive type: string, boolean or numeric.
- An array of primitive type values. The array MUST be homogeneous,
i.e. it MUST NOT contain values of different types.
- A child Attribute
@Oberon00 (Member) May 12, 2020

A single child attribute? That does not sound useful. So instead of Attribute(3), I could have Attribute(Attribute(Attribute(3)))? That's probably not what you meant to say.

@Oberon00 (Member)

Ah, I think I understand: The attribute does include the key! Then I would allow the child attribute only in the array because Attribute(key=foo, value=Attribute(key=bar, value=42)) sounds useless.

@shengxil (Author) May 14, 2020

Appreciate the feedback! Yeah, that makes sense. Even if some use cases need a single child attribute, it can still be placed in the array. I'll update the PR.

@tigrannajaryan
Member

Note that due to the upcoming Log Data Model we will likely support maps and arrays for attribute values. Logs need this (most importantly for the body). It is very likely that this enhanced attribute capability will be applied to other signals as well; there is no reason to limit it to logs only.

- An array of primitive type values. The array MUST be homogeneous,
i.e. it MUST NOT contain values of different types.
- A child Attribute
- An array of primitive type values or children Attributes. The array MUST be
Member

The array restrictions (no nested arrays, no different types) were made to ease implementation and APIs in statically typed languages. This PR would mostly negate this, because by using an array of attributes you could have different values inside these attributes, including arrays.

@shengxil (Author)

I would argue that the current homogeneity convention exists because some languages can't accept an array whose members have different types. If we consider the whole Attribute as a concrete type, and its value type and value as properties of that type, then within an array all sibling Attributes still have the same type.

Programming-wise, IMHO, the Attribute encapsulates the data, so from the array's point of view the value type is invisible.

@Oberon00
Member

Oberon00 commented May 12, 2020

> This does not work if any key contains a period itself.

Note that attribute keys are still subject to (undecided) restrictions, see #504.

Also, I suggest the following: when parsing such an attribute (e.g. http.header.*), ignore any dots in the * part. Maybe even use a syntax like http.header:* (note the colon instead of the period) to make it clear that dots are ignored from that point on.
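
A small Go sketch of that parse rule, assuming the hypothetical colon convention above (everything after the first ':' is one opaque leaf key whose dots are not separators):

```go
package main

import (
	"fmt"
	"strings"
)

// splitAttrKey splits "http.header:keep-alive" into the dot-separated
// prefix ["http", "header"] and the opaque leaf "keep-alive".
func splitAttrKey(key string) (prefix []string, leaf string) {
	head, tail, _ := strings.Cut(key, ":")
	return strings.Split(head, "."), tail
}

func main() {
	prefix, leaf := splitAttrKey("http.header:x.forwarded.for")
	fmt.Println(prefix, leaf) // [http header] x.forwarded.for
}
```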

@shengxil
Author

> HTTP headers can be represented using an attribute http.request.headers.HEADER-NAME per header.

It is feasible on the trace-generating side (client), but not a good experience on the consumer side (tracer provider), especially for deeply nested data. For example, AWS traces have data like:

AWS: [
    Lambda1: {
        Id: Id1,
        Name: Foo
    },
    EC2: {
        Id: Id2,
        Metadata: [
            Key1: Value1,
            Key2: Value2
        ]
    }
]

It becomes very difficult to parse the data if we use a list of flattened String keys; we have to do a lot of String joining/delimiting. A nested attribute or map would provide much better flexibility in dealing with such scenarios.

Nested attribute support should also apply to the span Resource (#579) and to logs (open-telemetry/oteps#97). A span may contain multiple resources from different origins.

@shengxil
Author

> This does not work if any key contains a period itself.

> Note that attribute keys are still subject to (undecided) restrictions, see #504.

> Also, I suggest the following: when parsing such an attribute (e.g. http.header.*), ignore any dots in the * part. Maybe even use a syntax like http.header:* (note the colon instead of the period) to make it clear that dots are ignored from that point on.

IMHO the restriction makes it more controversial. If we don't allow periods or colons in keys, we can't use them as delimiters at all (a flattened key like 'http.headers.x-forwarded-for' itself contains periods); if we do allow them, a single component key may itself contain the delimiter, which breaks the parser. Today we have semantic HTTP keys that work well, but out of concern for future scalability I don't think banning periods/colons in attribute keys is a good idea.

On the other hand, nested attributes or maps can mitigate this issue.

@shengxil
Author

> Note that due to the upcoming Log Data Model we will likely support maps and arrays for attribute values. Logs need this (most importantly for the body). It is very likely that this enhanced attribute capability will be applied to other signals as well; there is no reason to limit it to logs only.

Thanks for bringing this up. I'm trying to represent multi-level values here. I'm wondering whether the log map supports nested maps. If yes, does every language support it?

@tigrannajaryan
Member

> Thanks for bringing this up. I'm trying to represent multi-level values here. I'm wondering whether the log map supports nested maps. If yes, does every language support it?

The data model requires that nested maps be supported for the log body. Language implementations don't exist yet.

@anuraaga
Contributor

Another interesting part of the AWS trace model is metadata

https://docs.aws.amazon.com/xray/latest/devguide/xray-api-segmentdocuments.html#api-segmentdocuments-metadata

Metadata allows users to store arbitrary JSON-like structures, which can provide a nice tree-like UI (as opposed to the common two-column one for simple K-V pairs) and more structure for automated workflows.

One thought: it's relatively easy to flatten a JSON-like structure, e.g. these nested attributes, into key-value pairs using something like dot delimiters during export, for backends that don't support rich attributes; they would still get the normal K-V pair functionality. But the reverse, unflattening K-V pairs with a delimiter, is ambiguous unless the key characters are restricted, so backends with rich attributes would just have to drop support. I guess making the instrumentation layer more general and simplifying in the exporters could support more use cases? Given the popularity of JSON even as a data storage format (e.g., MongoDB and similar stores), it seems like many backends would want to expose the richness in metadata.
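
A quick Go sketch of the easy direction (flattening); the Node type and helper names here are illustrative assumptions:

```go
package main

import "fmt"

// Node is a stand-in for a nested attribute value: leaves or nested Nodes.
type Node map[string]interface{}

// flatten joins nested keys with dots; this direction is unambiguous.
func flatten(prefix string, n Node, out map[string]interface{}) {
	for k, v := range n {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		if child, ok := v.(Node); ok {
			flatten(key, child, out)
		} else {
			out[key] = v
		}
	}
}

func main() {
	out := map[string]interface{}{}
	flatten("", Node{"http": Node{"headers": Node{"keep-alive": "timeout=5"}}}, out)
	fmt.Println(out) // map[http.headers.keep-alive:timeout=5]
	// The reverse is ambiguous: "http.headers.keep-alive" could mean
	// {"http":{"headers":{"keep-alive":...}}} or {"http.headers":{"keep-alive":...}}.
}
```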

> The array restrictions (no nested arrays, no different types) were made to ease implementation and APIs in statically typed languages.

Supporting dynamic structure could definitely make language implementations more complex. But considering how prevalent JSON is, maybe it's not so bad since I guess pretty much all languages have a good story for handling it nowadays.

tigrannajaryan pushed a commit to tigrannajaryan/exp-otelproto that referenced this pull request Jun 8, 2020
## Summary

This adds support for arrays and maps to attribute values, including support for
nested values.

This is a breaking protocol change.

Resolves: open-telemetry/opentelemetry-specification#376

## Motivation

There are several reasons for this change:

- The API defines that attribute values
  [may contain arrays of values](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#set-attributes).
  However, the protocol has no way of representing array values.

- We intend to support Log data type in the protocol, which also requires array
  values (it is a Log Data Model requirement). In addition, Log data type
  requires support of key-value lists (maps) as attribute values, including
  nested values.

- There are long-standing requests to support nested values, arrays and maps for
  attributes:
  open-telemetry/opentelemetry-specification#376
  open-telemetry/opentelemetry-specification#596

This change introduces AnyValue. AnyValue can represent arbitrary numeric,
boolean, or string values, as well as arrays or maps of values, and allows for
nesting. AnyValue can represent any data that can be represented in JSON.

AttributeKeyValue now uses AnyValue to store the "value" part.
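
The proposed schema itself is not reproduced in this message, so as a reading aid here is a rough hand-written Go rendering of the oneof-based shape described above (a sketch under assumed names, not the actual generated OTLP code):

```go
package otlpsketch

// AnyValue carries exactly one variant, mirroring a proto oneof.
type AnyValue struct {
	Value isAnyValue
}

type isAnyValue interface{ isAnyValue() }

type StringValue struct{ V string }
type BoolValue struct{ V bool }
type IntValue struct{ V int64 }
type DoubleValue struct{ V float64 }
type ArrayValue struct{ Values []AnyValue }           // nested arrays allowed
type KVListValue struct{ Values []AttributeKeyValue } // nested maps allowed

func (StringValue) isAnyValue() {}
func (BoolValue) isAnyValue()   {}
func (IntValue) isAnyValue()    {}
func (DoubleValue) isAnyValue() {}
func (ArrayValue) isAnyValue()  {}
func (KVListValue) isAnyValue() {}

// AttributeKeyValue now stores its value as an AnyValue.
type AttributeKeyValue struct {
	Key   string
	Value AnyValue
}
```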

Note: below "Current" refers to the state of the "master" branch before this
PR/commit is merged. "Proposed" refers to the schema suggested in this
PR/commit.

## Performance

This change has a negative impact on performance when using the canonical Go
ProtoBuf compiler (compared to the current OTLP state):

```
BenchmarkEncode/Current/Trace/Attribs-8              	     813	   1479588 ns/op
BenchmarkEncode/Proposed/Trace/Attribs-8             	     417	   2873476 ns/op
BenchmarkEncode/OpenCensus/Trace/Attribs-8           	     162	   7354799 ns/op

BenchmarkDecode/Current/Trace/Attribs-8              	     460	   2646059 ns/op	 1867627 B/op	   36201 allocs/op
BenchmarkDecode/Proposed/Trace/Attribs-8             	     246	   4827671 ns/op	 2171734 B/op	   56209 allocs/op
BenchmarkDecode/OpenCensus/Trace/Attribs-8           	     154	   7560952 ns/op	 2775949 B/op	   76166 allocs/op
```

However, I do not think this is important for most applications. Serialization
CPU and memory usage is going to be a tiny portion of consumed resources for
most applications, except certain specialized ones.

For perspective, I am also showing OpenCensus in the benchmark to make it
clear that we are still significantly faster than it despite becoming slower
compared to the current state.

More importantly, performance-critical applications can use the Gogo ProtoBuf
compiler (the Collector does use it), which _gains_ performance due to this change:

```
BenchmarkEncode/Current(Gogo)/Trace/Attribs-8        	    1645	    705385 ns/op
BenchmarkEncode/Proposed(Gogo)/Trace/Attribs-8       	    1555	    698771 ns/op

BenchmarkDecode/Current(Gogo)/Trace/Attribs-8        	     537	   2241570 ns/op	 2139634 B/op	   36201 allocs/op
BenchmarkDecode/Proposed(Gogo)/Trace/Attribs-8       	     600	   2053120 ns/op	 1323287 B/op	   46205 allocs/op
```

With the Gogo compiler, the proposed approach uses 40% less memory than the current schema.

After considering all tradeoffs and alternates (see below) I believe this
proposal is the best overall approach for OTLP. It is idiomatic ProtoBuf, easy
to read and understand, is future-proof to adding new attribute types, has
enough flexibility to represent simple and complex attribute values for all
telemetry types, and can be made fast by custom code generation for
applications where it matters.

Note: all performance measurements are done for Go implementation only (although
it is expected that other languages should exhibit somewhat similar behavior).

## Alternates Considered

I also designed and benchmarked several alternate schemas; see below.

### Adding array value to AttributeKeyValue

This is the simplest approach. It doubles down on the current OTLP protocol
approach and simply adds an "array_values" field to AttributeKeyValue, e.g.:

```proto
message AttributeKeyValue {
  // all existing fields here.

  // A list of values. "key" field of each element in the list is ignored.
  repeated AttributeKeyValue array_values = 7;
}
```

This eliminates the need to have a separate AnyValue message and has lower CPU
usage because it requires fewer indirections and fewer memory allocations per
value. However, it is semantically incorrect, since the elements of the array
must actually be values, not key-value pairs, which this schema violates. It
also uses more memory than the proposed approach:

```
BenchmarkEncode/Proposed/Trace/Attribs-8             	     400	   2869055 ns/op
BenchmarkEncode/MoreFieldsinAKV/Trace/Attribs-8      	     754	   1540978 ns/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     250	   4790010 ns/op	 2171741 B/op	   56209 allocs/op
BenchmarkDecode/MoreFieldsinAKV/Trace/Attribs-8      	     420	   2806918 ns/op	 2347827 B/op	   36201 allocs/op
```

It will become even worse memory-wise if in the future we need to add more data
types to attributes. This approach is not scalable for future needs and is
semantically wrong.

### Fat AnyValue instead of oneof.

In this approach AnyValue contains all possible field types (similar to how
AttributeKeyValue is defined currently):

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    double double_value = 5;
    repeated AnyValue list_values = 6;
    repeated AttributeKeyValue kvlist_values = 7;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

This simplifies the schema; however, it results in a significantly bigger
AnyValue in memory. In the vast majority of cases attribute values are strings.
Integer and boolean values are also used (although significantly less
frequently than strings). Floating point numbers, arrays and maps are likely
going to be diminishingly rare in the attributes. If we kept all these value
types in AnyValue we would pay the cost for all these fields although almost
always only the string value would be present.

Here are benchmarks comparing the proposed schema with the fat AnyValue schema,
using string and integer attributes in spans:

```
BenchmarkEncode/Proposed/Trace/Attribs-8             	     415	   2894513 ns/op	  456866 B/op	   10005 allocs/op
BenchmarkEncode/FatAnyValue/Trace/Attribs-8          	     646	   1885003 ns/op	  385024 B/op	       1 allocs/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     247	   4872270 ns/op	 2171746 B/op	   56209 allocs/op
BenchmarkDecode/FatAnyValue/Trace/Attribs-8          	     343	   3423494 ns/op	 2988081 B/op	   46205 allocs/op
```

Memory usage with this approach is much higher; it is also not future-proof
and will become worse as we add more types.

### AnyValue plus ExoticValue

This is based on the fat AnyValue approach, but rarely used value types are
moved to a separate ExoticValue message that may be referenced from AnyValue
if needed:

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    ExoticValue exotic_value = 5;
}
message ExoticValue {
    double double_value = 1;
    repeated AnyValue array_values = 2;
    repeated AttributeKeyValue kvlist_values = 3;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

While this improves performance (particularly, it lowers memory usage for the
most frequently used types of attributes), it is awkward and sacrifices too
much readability and usability for small performance gains. Also, for the rare
cases it is slow and uses even more memory, so its edge-case behavior is not
desirable.

### Using different schema for log data type

I also considered using a different message definition for LogRecord
attributes. This would make it possible to eliminate some of the requirements
that we do not yet formally have for Span attributes (particularly the need to
have maps of nested values).

However, this does not help much in terms of performance, makes Span and
LogRecord attributes non-interchangeable and significantly increases the bloat
of code in applications that need to work with both Spans and Log records.
tigrannajaryan pushed a commit to tigrannajaryan/exp-otelproto that referenced this pull request Jun 8, 2020
## Summary

This adds support for arrays and maps to attribute values, including support for
nested values.

This is a breaking protocol change.

Resolves: open-telemetry/opentelemetry-specification#376

## Motivation

There are several reasons for this change:

- The API defines that attributes values
  [may contain arrays of values](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#set-attributes).
  However the protocol has no way of representing array values.

- We intend to support Log data type in the protocol, which also requires array
  values (it is a Log Data Model requirement). In addition, Log data type
  requires support of key-value lists (maps) as attribute values, including
  nested values.

- There are long-standing requests to support nested values, arrays and maps for
  attributes:
  open-telemetry/opentelemetry-specification#376
  open-telemetry/opentelemetry-specification#596

This change introduces AnyValue. AnyValue can represent arbitrary numeric,
boolean, string, arrays or maps of values and allows for nesting. AnyValue can
represent any data that can be represented in JSON.

AttributeKeyValue now uses AnyValue to store the "value" part.

Note: below "Current" refers to the state of the "master" branch before this
PR/commit is merged. "Proposed" refers to the schema suggested in this
PR/commit.

## Performance

This change has a negative impact on the performance (compared to current OTLP state):

```
BenchmarkEncode/Current/Trace/Attribs-8              	     813	   1479588 ns/op
BenchmarkEncode/Proposed/Trace/Attribs-8             	     417	   2873476 ns/op
BenchmarkEncode/OpenCensus/Trace/Attribs-8           	     162	   7354799 ns/op

BenchmarkDecode/Current/Trace/Attribs-8              	     460	   2646059 ns/op	 1867627 B/op	   36201 allocs/op
BenchmarkDecode/Proposed/Trace/Attribs-8             	     246	   4827671 ns/op	 2171734 B/op	   56209 allocs/op
BenchmarkDecode/OpenCensus/Trace/Attribs-8           	     154	   7560952 ns/op	 2775949 B/op	   76166 allocs/op
```

However, I do not think this is important for most applications. Serialization
CPU and Memory usage is going to be a tiny portion of consumed resources for
most applications, except certain specialized ones.

For the perspective I am also showing OpenCensus in the benchmark to make it
clear that we are still significantly faster than it despite becoming slower
compared to current state.

More importantly, performance critical applications can use Gogo ProtoBuf
generator (Collector does use it), which _gains_ performance due to this change:

```
BenchmarkEncode/Current(Gogo)/Trace/Attribs-8        	    1645	    705385 ns/op
BenchmarkEncode/Proposed(Gogo)/Trace/Attribs-8       	    1555	    698771 ns/op

BenchmarkDecode/Current(Gogo)/Trace/Attribs-8        	     537	   2241570 ns/op	 2139634 B/op	   36201 allocs/op
BenchmarkDecode/Proposed(Gogo)/Trace/Attribs-8       	     600	   2053120 ns/op	 1323287 B/op	   46205 allocs/op
```

With Gogoproto proposed approach uses 40% less memory than the current schema.

After considering all tradeoffs and alternates (see below) I believe this
proposal is the best overall approach for OTLP. It is idiomatic ProtoBuf, easy
to read and understand, is futureproof to adding new attribute types, has enough
flexibility to represent simple and complex attribute values for all telemetry
types and can be made fast by custom code generation for applications where it
matters.

Note: all performance measurements are done for Go implementation only (although
it is expected that other languages should exhibit somewhat similar behavior).

## Alternates Considered

I also designed and benchmarked several alternate schemas, see below.

### Adding array value to AttributeKeyValue

This is the simples approach. It doubles down on the current OTLP protocol
approach and simply adds "array_values" field to AttributeKeyValue, e.g.:

```proto
message AttributeKeyValue {
  // all existing fields here.

  // A list of values. "key" field of each element in the list is ignored.
  repeated AttributeKeyValue array_values = 7;
}
```

This eliminates the need to have a separate AnyValue message and has lower CPU
usage because it requires less indirections and less memory allocations per
value. However, this is semantically incorrect since the elements of the array
must actually be values not key-value pairs, which this schema violates. It also
uses more memory than the proposed approach:

```proto
BenchmarkEncode/Proposed/Trace/Attribs-8             	     400	   2869055 ns/op
BenchmarkEncode/MoreFieldsinAKV/Trace/Attribs-8      	     754	   1540978 ns/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     250	   4790010 ns/op	 2171741 B/op	   56209 allocs/op
BenchmarkDecode/MoreFieldsinAKV/Trace/Attribs-8      	     420	   2806918 ns/op	 2347827 B/op	   36201 allocs/op
```

It will become even worse if in the future we need to add more data types to
attributes. This approach is not scalable for future needs and is semantically
wrong.

### Fat AnyValue instead of oneof.

In this approach AnyValue contains all possible field types (similarly to how
AttributeKeyValue is currently):

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    double double_value = 5;
    repeated AnyValue list_values = 6;
    repeated AttributeKeyValue kvlist_values = 7;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

This simplifies the schema however it results in significantly bigger AnyValue
in-memory. In vast majority of cases attribute values are strings. Integer and
boolean values are also used (although significantly less frequently than
strings). Floating point number, arrays and maps are likely going to be
diminishingly rare in the attributes. If we kept all these value types in
AnyValue we would pay the cost for all these fields although almost always only
string value would be present.

Here are benchmarks comparing proposed schema and schema with fat AnyValue and
using string and integer attributes in spans:

```
BenchmarkEncode/Proposed/Trace/Attribs-8             	     415	   2894513 ns/op	  456866 B/op	   10005 allocs/op
BenchmarkEncode/FatAnyValue/Trace/Attribs-8          	     646	   1885003 ns/op	  385024 B/op	       1 allocs/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     247	   4872270 ns/op	 2171746 B/op	   56209 allocs/op
BenchmarkDecode/FatAnyValue/Trace/Attribs-8          	     343	   3423494 ns/op	 2988081 B/op	   46205 allocs/op
```

Memory usage with this approach is much higher and it also is not futureproof
and will become worse as we add more types.

### AnyValue plus ExoticValue

This is based on fat AnyValue approach but rarely used value types are moved to
separate ExoticValue message that may be referenced from AnyValue if needed:

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    ExoticValue exotic_value = 5;
}
message ExoticValue {
    double double_value = 1;
    repeated AnyValue array_values = 2;
    repeated AttributeKeyValue kvlist_values = 3;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

While this improves the performance (particularly lowers memory usage for most
frequently used types of attributes) it is awkward and sacrifices too much
readability and usability for small performance gains. Also for the rare cases
it is slow and uses even more memory so its edge case behavior is not desirable.

### Using different schema for log data type

I also considered using a different message definition for LogRecord
attributes. This would allow to eliminate some of the requirements that we do
not yet formally have for Span attributes (particularly the need to have maps of
nested values).

However, this does not help much in terms of performance, makes Span and
LogRecord attributes non-interchangeable and significantly increases the bloat
of code in applications that need to work with both Spans and Log records.
tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-proto that referenced this pull request Jun 8, 2020
## Summary

This adds support for arrays and maps to attribute values, including support for
nested values.

This is a breaking protocol change.

Resolves: open-telemetry/opentelemetry-specification#376

## Motivation

There are several reasons for this change:

- The API defines that attributes values
  [may contain arrays of values](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#set-attributes).
  However the protocol has no way of representing array values.

- We intend to support Log data type in the protocol, which also requires array
  values (it is a Log Data Model requirement). In addition, Log data type
  requires support of key-value lists (maps) as attribute values, including
  nested values.

- There are long-standing requests to support nested values, arrays and maps for
  attributes:
  open-telemetry/opentelemetry-specification#376
  open-telemetry/opentelemetry-specification#596

This change introduces AnyValue. AnyValue can represent arbitrary numeric,
boolean, string, arrays or maps of values and allows nesting. AnyValue can
represent any data that can be represented in JSON.

AttributeKeyValue now uses AnyValue to store the "value" part.

Note: below "Current" refers to the state of the "master" branch before this
PR/commit is merged. "Proposed" refers to the schema suggested in this
PR/commit.

## Performance

This change has a negative impact on the performance when using canonical Go
ProtoBuf compiler (compared to current OTLP state):

```
BenchmarkEncode/Current/Trace/Attribs-8              	     813	   1479588 ns/op
BenchmarkEncode/Proposed/Trace/Attribs-8             	     417	   2873476 ns/op
BenchmarkEncode/OpenCensus/Trace/Attribs-8           	     162	   7354799 ns/op

BenchmarkDecode/Current/Trace/Attribs-8              	     460	   2646059 ns/op	 1867627 B/op	   36201 allocs/op
BenchmarkDecode/Proposed/Trace/Attribs-8             	     246	   4827671 ns/op	 2171734 B/op	   56209 allocs/op
BenchmarkDecode/OpenCensus/Trace/Attribs-8           	     154	   7560952 ns/op	 2775949 B/op	   76166 allocs/op
```

However, I do not think this is important for most applications. Serialization
CPU and Memory usage is going to be a tiny portion of consumed resources for
most applications, except certain specialized ones.

For the perspective I am also showing OpenCensus in the benchmark to make it
clear that we are still significantly faster than it despite becoming slower
compared to the current state.

More importantly, performance critical applications can use Gogo ProtoBuf
compiler (Collector does use it), which _gains_ performance due to this change:

```
BenchmarkEncode/Current(Gogo)/Trace/Attribs-8        	    1645	    705385 ns/op
BenchmarkEncode/Proposed(Gogo)/Trace/Attribs-8       	    1555	    698771 ns/op

BenchmarkDecode/Current(Gogo)/Trace/Attribs-8        	     537	   2241570 ns/op	 2139634 B/op	   36201 allocs/op
BenchmarkDecode/Proposed(Gogo)/Trace/Attribs-8       	     600	   2053120 ns/op	 1323287 B/op	   46205 allocs/op
```

With Gogo compiler proposed approach uses 40% less memory than the current
schema.

After considering all tradeoffs and alternates (see below) I believe this
proposal is the best overall approach for OTLP. It is idiomatic ProtoBuf, easy
to read and understand, is future-proof to adding new attribute types, has
enough flexibility to represent simple and complex attribute values for all
telemetry types and can be made fast by custom code generation for applications
where it matters using Gogo ProtoBuf compiler.

Note: all performance measurements are done for Go implementation only (although
it is expected that other languages should exhibit somewhat similar behavior).

## Alternates Considered

I also designed and benchmarked several alternate schemas, see below.

### Adding array value to AttributeKeyValue

This is the simplest approach. It doubles down on the current OTLP protocol
approach and simply adds "array_values" field to AttributeKeyValue, e.g.:

```proto
message AttributeKeyValue {
  // all existing fields here.

  // A list of values. "key" field of each element in the list is ignored.
  repeated AttributeKeyValue array_values = 7;
}
```

This eliminates the need to have a separate AnyValue message and has lower CPU
usage because it requires less indirections and less memory allocations per
value. However, this is semantically incorrect since the elements of the array
must actually be values not key-value pairs, which this schema violates. It also
uses more memory than the proposed approach:

```proto
BenchmarkEncode/Proposed/Trace/Attribs-8             	     400	   2869055 ns/op
BenchmarkEncode/MoreFieldsinAKV/Trace/Attribs-8      	     754	   1540978 ns/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     250	   4790010 ns/op	 2171741 B/op	   56209 allocs/op
BenchmarkDecode/MoreFieldsinAKV/Trace/Attribs-8      	     420	   2806918 ns/op	 2347827 B/op	   36201 allocs/op
```

It will become even worse memory-wise if in the future we need to add more data
types to attributes. This approach is not scalable for future needs and is
semantically wrong.

### Fat AnyValue instead of oneof.

In this approach AnyValue contains all possible field values (similarly to how
AttributeKeyValue is currently):

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    double double_value = 5;
    repeated AnyValue list_values = 6;
    repeated AttributeKeyValue kvlist_values = 7;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

This simplifies the schema however it results in significantly bigger AnyValue
in-memory. In vast majority of cases attribute values are strings. Integer and
boolean values are also used, although significantly less frequently than
strings. Floating point number, arrays and maps are likely going to be
diminishingly rare in the attributes. If we keep all these value types in
AnyValue we will pay the cost for all these fields although almost always only
string value would be present.

Here are benchmarks comparing proposed schema and schema with fat AnyValue and
using string and integer attributes in spans:

```
BenchmarkEncode/Proposed/Trace/Attribs-8             	     415	   2894513 ns/op	  456866 B/op	   10005 allocs/op
BenchmarkEncode/FatAnyValue/Trace/Attribs-8          	     646	   1885003 ns/op	  385024 B/op	       1 allocs/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     247	   4872270 ns/op	 2171746 B/op	   56209 allocs/op
BenchmarkDecode/FatAnyValue/Trace/Attribs-8          	     343	   3423494 ns/op	 2988081 B/op	   46205 allocs/op
```

Memory usage with this approach is much higher and it also will become worse as
we add more types.

### AnyValue plus ExoticValue

This is based on fat AnyValue approach but rarely used value types are moved to
a separate ExoticValue message that may be referenced from AnyValue if needed:

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    ExoticValue exotic_value = 5;
}
message ExoticValue {
    double double_value = 1;
    repeated AnyValue array_values = 2;
    repeated AttributeKeyValue kvlist_values = 3;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

While this improves the performance (particularly lowers memory usage for most
frequently used types of attributes) it is awkward and sacrifices too much
readability and usability for small performance gains. Also for the rare cases
it is slow and uses even more memory so its edge case behavior is not desirable.

### Using different schema for log data type

I also considered using a different message definition for LogRecord attributes
and Spans. This would allow to eliminate some of the requirements that we do not
yet formally have for Span attributes (particularly the need to have maps of
nested values).

However, this does not help much in terms of performance, makes Span and
LogRecord attributes non-interchangeable and significantly increases the bloat
of code in applications that need to work with both Spans and Log records.
tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-proto that referenced this pull request Jun 8, 2020
## Summary

This adds support for arrays and maps to attribute values, including support for
nested values.

This is a breaking protocol change.

Resolves: open-telemetry/opentelemetry-specification#376
Resolves: open-telemetry#106

## Motivation

There are several reasons for this change:

- The API defines that attributes values
  [may contain arrays of values](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#set-attributes).
  However the protocol has no way of representing array values.

- We intend to support Log data type in the protocol, which also requires array
  values (it is a Log Data Model requirement). In addition, Log data type
  requires support of key-value lists (maps) as attribute values, including
  nested values.

- There are long-standing requests to support nested values, arrays and maps for
  attributes:
  open-telemetry/opentelemetry-specification#376
  open-telemetry/opentelemetry-specification#596

This change introduces AnyValue. AnyValue can represent arbitrary numeric,
boolean, string, arrays or maps of values and allows nesting. AnyValue can
represent any data that can be represented in JSON.

AttributeKeyValue now uses AnyValue to store the "value" part.

Note: below "Current" refers to the state of the "master" branch before this
PR/commit is merged. "Proposed" refers to the schema suggested in this
PR/commit.

## Performance

This change has a negative impact on the performance when using canonical Go
ProtoBuf compiler (compared to current OTLP state):

```
BenchmarkEncode/Current/Trace/Attribs-8              	     813	   1479588 ns/op
BenchmarkEncode/Proposed/Trace/Attribs-8             	     417	   2873476 ns/op
BenchmarkEncode/OpenCensus/Trace/Attribs-8           	     162	   7354799 ns/op

BenchmarkDecode/Current/Trace/Attribs-8              	     460	   2646059 ns/op	 1867627 B/op	   36201 allocs/op
BenchmarkDecode/Proposed/Trace/Attribs-8             	     246	   4827671 ns/op	 2171734 B/op	   56209 allocs/op
BenchmarkDecode/OpenCensus/Trace/Attribs-8           	     154	   7560952 ns/op	 2775949 B/op	   76166 allocs/op
```

However, I do not think this is important for most applications. Serialization
CPU and Memory usage is going to be a tiny portion of consumed resources for
most applications, except certain specialized ones.

For the perspective I am also showing OpenCensus in the benchmark to make it
clear that we are still significantly faster than it despite becoming slower
compared to the current state.

More importantly, performance critical applications can use Gogo ProtoBuf
compiler (Collector does use it), which _gains_ performance due to this change:

```
BenchmarkEncode/Current(Gogo)/Trace/Attribs-8        	    1645	    705385 ns/op
BenchmarkEncode/Proposed(Gogo)/Trace/Attribs-8       	    1555	    698771 ns/op

BenchmarkDecode/Current(Gogo)/Trace/Attribs-8        	     537	   2241570 ns/op	 2139634 B/op	   36201 allocs/op
BenchmarkDecode/Proposed(Gogo)/Trace/Attribs-8       	     600	   2053120 ns/op	 1323287 B/op	   46205 allocs/op
```

With Gogo compiler proposed approach uses 40% less memory than the current
schema.

After considering all tradeoffs and alternates (see below) I believe this
proposal is the best overall approach for OTLP. It is idiomatic ProtoBuf, easy
to read and understand, is future-proof to adding new attribute types, has
enough flexibility to represent simple and complex attribute values for all
telemetry types and can be made fast by custom code generation for applications
where it matters using Gogo ProtoBuf compiler.

Note: all performance measurements are done for Go implementation only (although
it is expected that other languages should exhibit somewhat similar behavior).

## Alternates Considered

I also designed and benchmarked several alternate schemas, see below.

### Adding array value to AttributeKeyValue

This is the simplest approach. It doubles down on the current OTLP protocol
approach and simply adds "array_values" field to AttributeKeyValue, e.g.:

```proto
message AttributeKeyValue {
  // all existing fields here.

  // A list of values. "key" field of each element in the list is ignored.
  repeated AttributeKeyValue array_values = 7;
}
```

This eliminates the need to have a separate AnyValue message and has lower CPU
usage because it requires less indirections and less memory allocations per
value. However, this is semantically incorrect since the elements of the array
must actually be values not key-value pairs, which this schema violates. It also
uses more memory than the proposed approach:

```proto
BenchmarkEncode/Proposed/Trace/Attribs-8             	     400	   2869055 ns/op
BenchmarkEncode/MoreFieldsinAKV/Trace/Attribs-8      	     754	   1540978 ns/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     250	   4790010 ns/op	 2171741 B/op	   56209 allocs/op
BenchmarkDecode/MoreFieldsinAKV/Trace/Attribs-8      	     420	   2806918 ns/op	 2347827 B/op	   36201 allocs/op
```

It will become even worse memory-wise if in the future we need to add more data
types to attributes. This approach is not scalable for future needs and is
semantically wrong.

### Fat AnyValue instead of oneof.

In this approach AnyValue contains all possible field values (similarly to how
AttributeKeyValue is currently):

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    double double_value = 5;
    repeated AnyValue list_values = 6;
    repeated AttributeKeyValue kvlist_values = 7;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

This simplifies the schema however it results in significantly bigger AnyValue
in-memory. In vast majority of cases attribute values are strings. Integer and
boolean values are also used, although significantly less frequently than
strings. Floating point number, arrays and maps are likely going to be
diminishingly rare in the attributes. If we keep all these value types in
AnyValue we will pay the cost for all these fields although almost always only
string value would be present.

Here are benchmarks comparing proposed schema and schema with fat AnyValue and
using string and integer attributes in spans:

```
BenchmarkEncode/Proposed/Trace/Attribs-8             	     415	   2894513 ns/op	  456866 B/op	   10005 allocs/op
BenchmarkEncode/FatAnyValue/Trace/Attribs-8          	     646	   1885003 ns/op	  385024 B/op	       1 allocs/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     247	   4872270 ns/op	 2171746 B/op	   56209 allocs/op
BenchmarkDecode/FatAnyValue/Trace/Attribs-8          	     343	   3423494 ns/op	 2988081 B/op	   46205 allocs/op
```

Memory usage with this approach is much higher and it also will become worse as
we add more types.

### AnyValue plus ExoticValue

This is based on fat AnyValue approach but rarely used value types are moved to
a separate ExoticValue message that may be referenced from AnyValue if needed:

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    ExoticValue exotic_value = 5;
}
message ExoticValue {
    double double_value = 1;
    repeated AnyValue array_values = 2;
    repeated AttributeKeyValue kvlist_values = 3;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

While this improves the performance (particularly lowers memory usage for most
frequently used types of attributes) it is awkward and sacrifices too much
readability and usability for small performance gains. Also for the rare cases
it is slow and uses even more memory so its edge case behavior is not desirable.

### Using different schema for log data type

I also considered using a different message definition for LogRecord attributes
and Spans. This would allow to eliminate some of the requirements that we do not
yet formally have for Span attributes (particularly the need to have maps of
nested values).

However, this does not help much in terms of performance, makes Span and
LogRecord attributes non-interchangeable and significantly increases the bloat
of code in applications that need to work with both Spans and Log records.
tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-proto that referenced this pull request Jun 8, 2020
## Summary

This adds support for arrays and maps to attribute values, including support for
nested values.

This is a breaking protocol change.

Resolves: open-telemetry/opentelemetry-specification#376
Resolves: open-telemetry#106

## Motivation

There are several reasons for this change:

- The API defines that attributes values [may contain arrays of values](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#set-attributes).
  However the protocol has no way of representing array values. We need to add
  such capability.

- We intend to support Log data type in the protocol, which also requires array
  values (it is a Log Data Model requirement). In addition, Log data type
  requires support of key-value lists (maps) as attribute values, including
  nested values.

- There are long-standing requests to support nested values, arrays and maps for
  attributes:
  open-telemetry/opentelemetry-specification#376
  open-telemetry/opentelemetry-specification#596

This change introduces AnyValue. AnyValue can represent arbitrary numeric,
boolean, string, arrays or maps of values and allows nesting. AnyValue can
represent any data that can be represented in JSON.

AttributeKeyValue now uses AnyValue to store the "value" part.

Note: below "Current" refers to the state of the "master" branch before this
PR/commit is merged. "Proposed" refers to the schema suggested in this
PR/commit.

## Performance

This change has a negative impact on the performance when using canonical Go
ProtoBuf compiler (compared to current OTLP state):

```
BenchmarkEncode/Current/Trace/Attribs-8              	     813	   1479588 ns/op
BenchmarkEncode/Proposed/Trace/Attribs-8             	     417	   2873476 ns/op
BenchmarkEncode/OpenCensus/Trace/Attribs-8           	     162	   7354799 ns/op

BenchmarkDecode/Current/Trace/Attribs-8              	     460	   2646059 ns/op	 1867627 B/op	   36201 allocs/op
BenchmarkDecode/Proposed/Trace/Attribs-8             	     246	   4827671 ns/op	 2171734 B/op	   56209 allocs/op
BenchmarkDecode/OpenCensus/Trace/Attribs-8           	     154	   7560952 ns/op	 2775949 B/op	   76166 allocs/op
```

However, I do not think this is important for most applications. Serialization
CPU and Memory usage is going to be a tiny portion of consumed resources for
most applications, except certain specialized ones.

For the perspective I am also showing OpenCensus in the benchmark to make it
clear that we are still significantly faster than it despite becoming slower
compared to the current state.

More importantly, performance critical applications can use Gogo ProtoBuf
compiler (Collector does use it), which _gains_ performance due to this change:

```
BenchmarkEncode/Current(Gogo)/Trace/Attribs-8        	    1645	    705385 ns/op
BenchmarkEncode/Proposed(Gogo)/Trace/Attribs-8       	    1555	    698771 ns/op

BenchmarkDecode/Current(Gogo)/Trace/Attribs-8        	     537	   2241570 ns/op	 2139634 B/op	   36201 allocs/op
BenchmarkDecode/Proposed(Gogo)/Trace/Attribs-8       	     600	   2053120 ns/op	 1323287 B/op	   46205 allocs/op
```

With Gogo compiler proposed approach uses 40% less memory than the current
schema.

After considering all tradeoffs and alternates (see below) I believe this
proposal is the best overall approach for OTLP. It is idiomatic ProtoBuf, easy
to read and understand, is future-proof to adding new attribute types, has
enough flexibility to represent simple and complex attribute values for all
telemetry types and can be made fast by custom code generation for applications
where it matters using Gogo ProtoBuf compiler.

Note: all performance measurements are done for Go implementation only (although
it is expected that other languages should exhibit somewhat similar behavior).

## Alternates Considered

I also designed and benchmarked several alternate schemas, see below.

### Adding array value to AttributeKeyValue

This is the simplest approach. It doubles down on the current OTLP protocol
approach and simply adds "array_values" field to AttributeKeyValue, e.g.:

```proto
message AttributeKeyValue {
  // all existing fields here.

  // A list of values. "key" field of each element in the list is ignored.
  repeated AttributeKeyValue array_values = 7;
}
```

This eliminates the need to have a separate AnyValue message and has lower CPU
usage because it requires less indirections and less memory allocations per
value. However, this is semantically incorrect since the elements of the array
must actually be values not key-value pairs, which this schema violates. It also
uses more memory than the proposed approach:

```proto
BenchmarkEncode/Proposed/Trace/Attribs-8             	     400	   2869055 ns/op
BenchmarkEncode/MoreFieldsinAKV/Trace/Attribs-8      	     754	   1540978 ns/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     250	   4790010 ns/op	 2171741 B/op	   56209 allocs/op
BenchmarkDecode/MoreFieldsinAKV/Trace/Attribs-8      	     420	   2806918 ns/op	 2347827 B/op	   36201 allocs/op
```

It will become even worse memory-wise if in the future we need to add more data
types to attributes. This approach is not scalable for future needs and is
semantically wrong.

### Fat AnyValue instead of oneof.

In this approach AnyValue contains all possible field values (similarly to how
AttributeKeyValue is currently):

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    double double_value = 5;
    repeated AnyValue list_values = 6;
    repeated AttributeKeyValue kvlist_values = 7;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

This simplifies the schema however it results in significantly bigger AnyValue
in-memory. In vast majority of cases attribute values are strings. Integer and
boolean values are also used, although significantly less frequently than
strings. Floating point number, arrays and maps are likely going to be
diminishingly rare in the attributes. If we keep all these value types in
AnyValue we will pay the cost for all these fields although almost always only
string value would be present.

Here are benchmarks comparing proposed schema and schema with fat AnyValue and
using string and integer attributes in spans:

```
BenchmarkEncode/Proposed/Trace/Attribs-8             	     415	   2894513 ns/op	  456866 B/op	   10005 allocs/op
BenchmarkEncode/FatAnyValue/Trace/Attribs-8          	     646	   1885003 ns/op	  385024 B/op	       1 allocs/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     247	   4872270 ns/op	 2171746 B/op	   56209 allocs/op
BenchmarkDecode/FatAnyValue/Trace/Attribs-8          	     343	   3423494 ns/op	 2988081 B/op	   46205 allocs/op
```

Memory usage with this approach is much higher and it also will become worse as
we add more types.

### AnyValue plus ExoticValue

This is based on fat AnyValue approach but rarely used value types are moved to
a separate ExoticValue message that may be referenced from AnyValue if needed:

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    ExoticValue exotic_value = 5;
}
message ExoticValue {
    double double_value = 1;
    repeated AnyValue array_values = 2;
    repeated AttributeKeyValue kvlist_values = 3;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

While this improves the performance (particularly lowers memory usage for most
frequently used types of attributes) it is awkward and sacrifices too much
readability and usability for small performance gains. Also for the rare cases
it is slow and uses even more memory so its edge case behavior is not desirable.

### Using different schema for log data type

I also considered using a different message definition for LogRecord attributes
and Spans. This would allow to eliminate some of the requirements that we do not
yet formally have for Span attributes (particularly the need to have maps of
nested values).

However, this does not help much in terms of performance, makes Span and
LogRecord attributes non-interchangeable and significantly increases the bloat
of code in applications that need to work with both Spans and Log records.
tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-proto that referenced this pull request Jun 8, 2020
## Summary

This adds support for arrays and maps to attribute values, including support for
nested values.

This is a breaking protocol change.

Resolves: open-telemetry/opentelemetry-specification#376
Resolves: open-telemetry#106

## Motivation

There are several reasons for this change:

- The API defines that attributes values [may contain arrays of values](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#set-attributes).
  However the protocol has no way of representing array values. We need to add
  such capability.

- We intend to support Log data type in the protocol, which also requires array
  values (it is a Log Data Model requirement). In addition, Log data type
  requires support of key-value lists (maps) as attribute values, including
  nested values.

- There are long-standing requests to support nested values, arrays and maps for
  attributes:
  open-telemetry/opentelemetry-specification#376
  open-telemetry/opentelemetry-specification#596

This change introduces AnyValue. AnyValue can represent arbitrary numeric,
boolean, string, arrays or maps of values and allows nesting. AnyValue can
represent any data that can be represented in JSON.

AttributeKeyValue now uses AnyValue to store the "value" part.

Note: below "Current" refers to the state of the "master" branch before this
PR/commit is merged. "Proposed" refers to the schema suggested in this
PR/commit.

## Performance

This change has a negative impact on the performance when using canonical Go
ProtoBuf compiler (compared to current OTLP state):

```
BenchmarkEncode/Current/Trace/Attribs-8              	     813	   1479588 ns/op
BenchmarkEncode/Proposed/Trace/Attribs-8             	     417	   2873476 ns/op
BenchmarkEncode/OpenCensus/Trace/Attribs-8           	     162	   7354799 ns/op

BenchmarkDecode/Current/Trace/Attribs-8              	     460	   2646059 ns/op	 1867627 B/op	   36201 allocs/op
BenchmarkDecode/Proposed/Trace/Attribs-8             	     246	   4827671 ns/op	 2171734 B/op	   56209 allocs/op
BenchmarkDecode/OpenCensus/Trace/Attribs-8           	     154	   7560952 ns/op	 2775949 B/op	   76166 allocs/op
```

However, I do not think this is important for most applications. Serialization
CPU and Memory usage is going to be a tiny portion of consumed resources for
most applications, except certain specialized ones.

For the perspective I am also showing OpenCensus in the benchmark to make it
clear that we are still significantly faster than it despite becoming slower
compared to the current state.

More importantly, performance critical applications can use Gogo ProtoBuf
compiler (Collector does use it), which _gains_ performance due to this change:

```
BenchmarkEncode/Current(Gogo)/Trace/Attribs-8        	    1645	    705385 ns/op
BenchmarkEncode/Proposed(Gogo)/Trace/Attribs-8       	    1555	    698771 ns/op

BenchmarkDecode/Current(Gogo)/Trace/Attribs-8        	     537	   2241570 ns/op	 2139634 B/op	   36201 allocs/op
BenchmarkDecode/Proposed(Gogo)/Trace/Attribs-8       	     600	   2053120 ns/op	 1323287 B/op	   46205 allocs/op
```

With Gogo compiler proposed approach uses 40% less memory than the current
schema.

After considering all tradeoffs and alternates (see below) I believe this
proposal is the best overall approach for OTLP. It is idiomatic ProtoBuf, easy
to read and understand, is future-proof to adding new attribute types, has
enough flexibility to represent simple and complex attribute values for all
telemetry types and can be made fast by custom code generation for applications
where it matters using Gogo ProtoBuf compiler.

Note: all performance measurements are done for Go implementation only (although
it is expected that other languages should exhibit somewhat similar behavior).

## Alternates Considered

I also designed and benchmarked several alternate schemas, see below.

### Adding array value to AttributeKeyValue

This is the simplest approach. It doubles down on the current OTLP protocol
approach and simply adds "array_values" field to AttributeKeyValue, e.g.:

```proto
message AttributeKeyValue {
  // all existing fields here.

  // A list of values. "key" field of each element in the list is ignored.
  repeated AttributeKeyValue array_values = 7;
}
```

This eliminates the need to have a separate AnyValue message and has lower CPU
usage because it requires less indirections and less memory allocations per
value. However, this is semantically incorrect since the elements of the array
must actually be values not key-value pairs, which this schema violates. It also
uses more memory than the proposed approach:

```proto
BenchmarkEncode/Proposed/Trace/Attribs-8             	     400	   2869055 ns/op
BenchmarkEncode/MoreFieldsinAKV/Trace/Attribs-8      	     754	   1540978 ns/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     250	   4790010 ns/op	 2171741 B/op	   56209 allocs/op
BenchmarkDecode/MoreFieldsinAKV/Trace/Attribs-8      	     420	   2806918 ns/op	 2347827 B/op	   36201 allocs/op
```

It will become even worse memory-wise if in the future we need to add more data
types to attributes. This approach is not scalable for future needs and is
semantically wrong.

### Fat AnyValue instead of oneof.

In this approach AnyValue contains all possible field values (similarly to how
AttributeKeyValue is currently):

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    double double_value = 5;
    repeated AnyValue list_values = 6;
    repeated AttributeKeyValue kvlist_values = 7;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

This results in a significantly bigger AnyValue in memory. In the vast
majority of cases, attribute values of produced telemetry are strings (see
e.g. the semantic conventions for proof). Integer and boolean values are also
used, although significantly less frequently than strings. Floating-point
numbers, arrays and maps are likely to be vanishingly rare among attributes.
If we keep all these value types in AnyValue, we pay the cost of all these
fields even though almost always only the string value is present.

Here are benchmarks comparing the proposed schema with the fat-AnyValue
schema, using string and integer attributes in spans:

```
BenchmarkEncode/Proposed/Trace/Attribs-8             	     415	   2894513 ns/op	  456866 B/op	   10005 allocs/op
BenchmarkEncode/FatAnyValue/Trace/Attribs-8          	     646	   1885003 ns/op	  385024 B/op	       1 allocs/op

BenchmarkDecode/Proposed/Trace/Attribs-8             	     247	   4872270 ns/op	 2171746 B/op	   56209 allocs/op
BenchmarkDecode/FatAnyValue/Trace/Attribs-8          	     343	   3423494 ns/op	 2988081 B/op	   46205 allocs/op
```

Memory usage with this approach is much higher, and it will also become worse
as we add more types.

### AnyValue plus ExoticValue

This is based on the fat-AnyValue approach, but rarely used value types are
moved to a separate ExoticValue message that may be referenced from AnyValue
when needed:

```proto
message AnyValue {
    ValueType type = 1;
    bool bool_value = 2;
    string string_value = 3;
    int64 int_value = 4;
    ExoticValue exotic_value = 5;
}
message ExoticValue {
    double double_value = 1;
    repeated AnyValue array_values = 2;
    repeated AttributeKeyValue kvlist_values = 3;
}
message AttributeKeyValue {
    string key = 1;
    AnyValue value = 2;
}
```

While this improves performance (in particular, it lowers memory usage for the
most frequently used attribute types), it is awkward and sacrifices too much
readability and usability for small performance gains. For the rare cases it
is also slow and uses even more memory, so its edge-case behavior is not
desirable.

### Using a different schema for the log data type

I also considered using different message definitions for LogRecord and Span
attributes. This would make it possible to drop some requirements that we do
not yet formally have for Span attributes (particularly the need for maps of
nested values).

However, this does not help much in terms of performance, makes Span and
LogRecord attributes non-interchangeable, and significantly increases code
bloat in applications that need to work with both Spans and Log records.

tigrannajaryan pushed a commit to tigrannajaryan/exp-otelproto that referenced this pull request Jun 8, 2020
tigrannajaryan pushed a commit to tigrannajaryan/opentelemetry-proto that referenced this pull request Jun 15, 2020
bogdandrutu pushed a commit to open-telemetry/opentelemetry-proto that referenced this pull request Jun 15, 2020
@mtwo mtwo added the release:after-ga Not required before GA release, and not going to work on before GA label Jun 30, 2020
@Oberon00 Oberon00 linked an issue Jul 20, 2020 that may be closed by this pull request
@github-actions

This PR was marked stale due to lack of activity. It will be closed in 7 days.

@SergeyKanzhelev
Member

Despite being small in text, this PR didn't receive a single approval. This may only mean that no contributors need this support.

Since this is the case, closing this PR. Perhaps it can be discussed after GA.

tigrannajaryan added a commit to tigrannajaryan/opentelemetry-specification that referenced this pull request Oct 17, 2022
Resolves open-telemetry#376

Use cases where this is necessary or useful:

1. Specify more than one resource in the telemetry: open-telemetry#579
2. Data coming from an external source, e.g. AWS Metadata: open-telemetry#596 (comment)
   or open-telemetry#376 (comment)
3. Capturing HTTP headers: open-telemetry#376 (comment)
4. Structured stack traces: open-telemetry#2841
5. Payloads as attributes: open-telemetry/oteps#219 (comment)

This is a draft PR to see what the change looks like.

If this PR is merged, it would be nice to follow it up with:
- A standard way of flattening maps and nested objects when converting from OTLP
  to formats that don't support maps/nested objects.
- Recommendations for semantic conventions to use/not use complex objects.