
feature request: functional directives #1234

Closed
bionicles opened this issue Nov 5, 2019 · 26 comments
@bionicles

Hi,
I love the directive idea, but I'm not in love with OOP. We're just friends. Would it be possible to implement directives with functional style code?

Thanks in advance for any assistance,
Bion @ Bit Pharma

@yaacovCR
Collaborator

Folding into #1306

This was referenced Mar 31, 2020
@yaacovCR
Collaborator

yaacovCR commented Apr 3, 2020

Moving to v5.1, reopening to move any relevant discussion here.

@yaacovCR yaacovCR reopened this Apr 3, 2020
@yaacovCR
Collaborator

yaacovCR commented Apr 3, 2020

See possible implementation suggestion here:

#1306 (comment)

@yaacovCR
Collaborator

yaacovCR commented Apr 3, 2020

Copying here, as discussion should continue here:

But ideally, thinking more broadly, to allow directives specified on object fields to easily modify the input fields of the resolver arguments (see the parallel approach at https://github.com/profusion/apollo-validation-directives/blob/master/README.md#value-validation), we can think of a different, more generic architecture altogether. A schema directive is meant to transform the schema, but perhaps a number of schema directives are meant to work together to modify a schema. A schema directive implementation should really be a function that takes a list of directive names (so users of schema directives can use whatever names they want, to prevent conflicts) and returns a function that takes a schema as an argument and returns a schema.

The implementation can rely on visitSchema or mapSchema to assess how the GraphQL objects are annotated and modify them accordingly.

An additional single exported helper would be useful to extract a map of the directives and their arguments from the AST.
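That proposed shape can be sketched in plain JavaScript. Everything here is illustrative: the plain object standing in for a GraphQLSchema and the helper names are assumptions, not the eventual graphql-tools API.

```javascript
// Hypothetical "templated" schema transform: a factory takes configuration
// (here, the directive name to look for) and returns a schema -> schema
// function. A plain object stands in for a real GraphQLSchema.
const makeDeprecationTransform = (directiveName) => (schema) => ({
  ...schema,
  fields: schema.fields.map((field) =>
    field.directives.includes(directiveName)
      ? { ...field, isDeprecated: true }
      : field
  ),
});

// Transforms compose by simple reduction: schema in, schema out.
const applyTransforms = (schema, transforms) =>
  transforms.reduce((acc, transform) => transform(acc), schema);

const schema = {
  fields: [
    { name: "title", directives: [] },
    { name: "oldField", directives: ["deprecated"] },
  ],
};

const transformed = applyTransforms(schema, [
  makeDeprecationTransform("deprecated"),
]);
```

Because each transform returns a new object rather than mutating its input, the original schema is left untouched, which mirrors the immutability direction discussed later in this thread.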

@bionicles
Author

bionicles commented Apr 7, 2020

To crystallize this into code examples, let’s consider how to make an @authorize directive to provide fine-grained field-level access control.

One problem I found was that the class is complicated, so you don't want to write a bunch of them; instead you make one @auth directive with a bunch of arguments and options, and it explodes with feature creep.

For example, you might want some fields to be public, other fields to be available to an author and admins, or to members of a group. Likewise, you might want to reject requests based on data state, like if someone tries to buy something and doesn't have the money.

The main appeal of functional directives, for me, would be to reduce the amount and complexity of the directive code, so I could make a lot of little helper directives instead of one class with a method for every case.

TLDR: a little arrow function like

const canAfford = ({ user, item }) => (user.balance > item.price ? 1 : 0);

is a lot easier to write, maintain, and share with friends and stakeholders than:


import { SchemaDirectiveVisitor } from "graphql-tools";
import { GraphQLField, GraphQLEnumValue } from "graphql";

class DeprecatedDirective extends SchemaDirectiveVisitor {
  public visitFieldDefinition(field: GraphQLField<any, any>) {
    field.isDeprecated = true;
    field.deprecationReason = this.args.reason;
  }

  public visitEnumValue(value: GraphQLEnumValue) {
    value.isDeprecated = true;
    value.deprecationReason = this.args.reason;
  }
}

Since directives are the single most powerful way to do advanced stuff with GraphQL, and we all want to do schema-first development anyway, this would make a huge difference for everyone: we could build a library of cool directives and share them (cue exponential growth for Apollo).

All roads to great GraphQL run through directives. If it's exponentially easier to make directives, then the developer experience will be exponentially better.

@yaacovCR
Collaborator

yaacovCR commented Apr 7, 2020

Not sure if the way I am thinking about this aligns with your goal. I think of schema directives as providing hints to schema transformation functions, which can be quite complex.

You may be interested as well in the older directiveResolvers https://github.com/apollographql/graphql-tools/blob/master/docs/source/directive-resolvers.md

These are pretty simple ways to chain resolver functions.
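A minimal sketch of that chaining idea, with illustrative names (the real directiveResolvers API is promise-based; this version is kept synchronous for clarity and is not graphql-tools code):

```javascript
// A directive resolver receives a `next` thunk wrapping the original field
// resolver, plus the usual resolver inputs, and may transform the result.
const upperCaseDirective = (next, source, args, context) => {
  const result = next();
  return typeof result === "string" ? result.toUpperCase() : result;
};

// An existing field resolver...
const originalResolver = () => "hello";

// ...chained through the directive resolver.
const wrappedResolver = (source, args, context, info) =>
  upperCaseDirective(
    () => originalResolver(source, args, context, info),
    source,
    args,
    context
  );
```

The wrapping composes: a second directive resolver could wrap `wrappedResolver` the same way, which is what stacking several directives on one field amounts to.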

@bionicles
Author

Apollo users just want to put easily readable and maintainable security rules on data fields for fine-grained, attribute-based access control, and to link objects and fields with relationships. The cleanest way to do that would be directives, but directives in Apollo Server right now are too complicated, with too much OOP boilerplate, visual complexity, and super long words. In fact, all of Apollo is way too complicated, but this part is particularly bad.

That's why I switched away from Apollo Server: the developer has to write a massive class just to add a one-liner rule like "allow authors to read their own posts".

Look at Hasura. Super easy to make graphql security rules. Then you come to Apollo, and you have to write 30 lines just to wrap the one line of code you care about.

Is it completely impossible to simplify directive implementation?

@yaacovCR
Collaborator

Is it completely impossible to simplify directive implementation?

I hope not!

I think what you are looking for is functionality like the older directive resolvers, linked above, but with the ability to specify directive arguments.

But, I am not really sure...

I think the above discussion might be more fruitful if you would include what arguments you think should be provided to your functional directive and what you think it should return.

Are you thinking it should be provided the resolver type args like parent, args, context, info, and should return data, the functional directiveResolvers way?

Or are you thinking it should be provided a graphql field and return a new field?

Have you looked at the existing functional directiveResolvers option? Does that help you at all? If it helps somewhat, how does it fall short?

I don't work for Apollo, I contribute here in my spare time -- I am happy you have found in Hasura functionality that is working for you. Do you have a particular link to how that involves schema directives?

@yaacovCR
Collaborator

I misspoke in one aspect: directiveResolvers already support arguments -- as per the docs, within your directiveResolver fn you get access to the parent, the directive arguments, and the context. In the source, it appears that info is also passed.

It might be helpful actually to get access to the original args as well. That could be added pretty easily. One might think you can get that from info, but default arguments are not actually included, and nothing is parsed into internal values.

The implementation for directiveResolvers is now done in terms of the newer schemaResolvers and could be changed as follows:

  Object.keys(directiveResolvers).forEach((directiveName) => {
    schemaDirectives[directiveName] = class extends SchemaDirectiveVisitor {
      public visitFieldDefinition(field: GraphQLField<any, any>) {
        const resolver = directiveResolvers[directiveName];
        const originalResolver =
          field.resolve != null ? field.resolve : defaultFieldResolver;
        const directiveArgs = this.args;
        field.resolve = (...args) => {
          // OLD: const [source /* original args */, , context, info] = args;
          const [source, fieldArgs, context, info] = args;
          return resolver(
            () =>
              new Promise((resolve, reject) => {
                const result = originalResolver.apply(field, args);
                if (result instanceof Error) {
                  reject(result);
                }
                resolve(result);
              }),
            source,
            fieldArgs, // <=== added this
            directiveArgs,
            context,
            info,
          );
        };
      }
    };
  });

  SchemaDirectiveVisitor.visitSchemaDirectives(schema, schemaDirectives);

Alternatively, you could just use the above as a blueprint for a schema directive.

@yaacovCR
Collaborator

yaacovCR commented Apr 20, 2020

I suppose it might be nice (but more complicated) to also require passing arguments to the next function so that you could modify the source or arguments passed to the underlying resolver, akin to how graphql middleware works. That is also easily accomplished with some modification of the above.
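A sketch of that middleware-style variant, where `next` accepts (possibly rewritten) source and args before calling the underlying resolver. All names here are illustrative assumptions, not graphql-tools or graphql-middleware API:

```javascript
// A middleware-style directive resolver: it rewrites the args it passes to
// `next`, here clamping a hypothetical `limit` argument into [default, 100].
const clampLimitDirective = (next, source, args, context, info) =>
  next(source, { ...args, limit: Math.min(args.limit ?? 10, 100) }, context, info);

// The underlying field resolver sees only the rewritten arguments.
const listResolver = (source, args) => `returning ${args.limit} items`;

const wrappedList = (source, args, context, info) =>
  clampLimitDirective(
    (s, a, c, i) => listResolver(s, a, c, i),
    source,
    args,
    context,
    info
  );
```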

@koresar

koresar commented May 5, 2020

Directives as functions would be much preferred over the visitor object. So simple!
Please, make this happen.

yaacovCR added a commit that referenced this issue May 12, 2020
schema transform functions take a schema as an argument and return a schema, possibly modifying it

a schema transform function can be templated, i.e. one can create a function that takes certain arguments, such as the names of directives that annotate fields of interest, and return a schema transform function that modifies the schema based on the specified annotating directives

this allows one to reuse a schema transform function across projects, allowing for the customization of the schema transform function on a per use basis

schema transform functions can be called directly on a schema to modify it before use -- makeExecutableSchema can now also be passed an array of schema transform functions, with makeExecutableSchema responsible for performing all schema transformations prior to returning.

Eventually, all schema visitor examples can be rewritten using the schema transform functional API

addresses #1234
@yaacovCR
Collaborator

@barbieri @alexstrat

I wonder if you would be so kind as to comment on #1463, in terms of whether you might find the approach more helpful for your validation directive work.

Could it be improved even further for that use case?

@yaacovCR
Collaborator

@bionicles @koresar and all tracking this issue, your thoughts are welcome as well!

Description of approach can be seen within the included changes to documentation.

@yaacovCR
Collaborator

@bionicles The simpler directive resolvers have been rewritten with mapSchema as well. I am not sure whether you have further improvements in mind, as you seemed more interested in directive resolvers. Please feel free to address that or comment on any of the above.

yaacovCR added a commit that referenced this issue May 13, 2020
* rewrite attachDirectiveResolvers to be immutable

* add mapping by arguments

* add schema transforms

schema transform functions take a schema as an argument and return a schema, possibly modifying it

a schema transform function can be templated, i.e. one can create a function that takes certain arguments, such as the names of directives that annotate fields of interest, and return a schema transform function that modifies the schema based on the specified annotating directives

this allows one to reuse a schema transform function across projects, allowing for the customization of the schema transform function on a per use basis

schema transform functions can be called directly on a schema to modify it before use -- makeExecutableSchema can now also be passed an array of schema transform functions, with makeExecutableSchema responsible for performing all schema transformations prior to returning.

Eventually, all schema visitor examples can be rewritten using the schema transform functional API

addresses #1234

* add generic field mapper to SchemaMapper to allow indiscriminate mapping of all fields and input fields

* add enum directive extraction and value mapping

* add tests demonstrating templated schema transforms

* update docs for schemaTransforms

include deprecated example in tests

reorder some types
@koresar

koresar commented May 13, 2020

@yaacovCR I'm looking at the new docs of the #1463 PR. Maybe I missed it but my main concern is not answered there yet.

When I tried using the @constrained directive on an input String inside my Apollo2 project it dynamically created and leaked the ConstrainedString type (see this comment). The type was exposed to my GraphQL users. I don't want any new types in my schema.

I was trying to implement the @constrained directive myself but without having to create new types. It appeared to be literally impossible.

Question.
If I implement the @constrained directive myself using this new approach will I be forced to create any new leaking schema types?

@yaacovCR
Collaborator

yaacovCR commented May 13, 2020

The validation directive included in the example does expose the new type.

To perform validation that differs depending on where the scalar type is employed, I believe one cannot use simple scalar parse functions. You could instead use output field resolver wrapping, an approach employed by @barbieri @alexstrat within their respective packages (see #789 and #842 (comment)).

Their work -- I believe -- was greatly complicated by the class-based schema directive approach, which essentially only exposed hooks for modification on the annotated GraphQL object, rather than on the entirety of the schema. I believe the functional approach adopted here would make their lives much easier: a first pass of mapSchema could simply scan for GraphQL objects tagged with the appropriate directives, and a second pass could then perform the modification where necessary.
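The two-pass idea can be sketched with plain objects standing in for GraphQL types (`collectTagged` and `rewriteTagged` are hypothetical helpers for illustration, not mapSchema API):

```javascript
// Pass 1: scan the schema for fields tagged with a given directive.
const collectTagged = (schema, directiveName) =>
  new Set(
    schema.fields
      .filter((field) => field.directives.includes(directiveName))
      .map((field) => field.name)
  );

// Pass 2: rewrite only the fields found in pass 1, leaving the rest alone.
const rewriteTagged = (schema, taggedNames, rewrite) => ({
  ...schema,
  fields: schema.fields.map((field) =>
    taggedNames.has(field.name) ? rewrite(field) : field
  ),
});

const schema = {
  fields: [
    { name: "email", directives: ["validate"] },
    { name: "name", directives: [] },
  ],
};

const tagged = collectTagged(schema, "validate");
const validated = rewriteTagged(schema, tagged, (field) => ({
  ...field,
  hasValidator: true,
}));
```

The point is that pass 2 has the full picture from pass 1 available, rather than only seeing one annotated object at a time as the class-based visitor hooks do.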

But I am eagerly awaiting their comments, because I am sure (just see my confusion within #789) that I am missing something with regard to this somewhat complicated topic.

Because of the complexity -- or just because some of it is clearly going over my head -- the implementation of the resolver validation approach will, for now, be in userland rather than in graphql-tools proper. But I hope that the functional approach within graphql-tools will be helpful to them and to the larger community.

I await further comments to find out for sure!

@alexstrat
Contributor

@yaacovCR you summed up exactly the concern IMO and indeed #1463 makes things easier.

The key feature is the fact that schemaTransforms, compared to schemaDirectives, can visit not only directive-targeted nodes, but all nodes. That's what makes implementations of argument validation by field-resolver-wrapping look like a standard use-case of graphql-tools and not a hack.

The OOP API per se was not a blocker, but the functional API seems less verbose, which probably makes things easier to read.

Though, my opinion is that the documentation, for the validation example, should now present the field-resolver-wrapping technique rather than the type/scalar hijacking technique, which has some disadvantages.

A basic use case, like validation of field arguments only (not input), is rather simple to write with the new API. More advanced use cases (deep validation of arguments, input objects, input fields, ...) can be deferred to userland libraries.
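A sketch of that field-resolver-wrapping technique for basic argument validation (all names here are illustrative, not graphql-tools API):

```javascript
// Wrap a field resolver so that a validator runs over the arguments first;
// the resolver only executes when validation passes.
const withArgValidation = (validate, resolver) => (source, args, context, info) => {
  const errors = validate(args);
  if (errors.length > 0) {
    throw new Error(`Invalid arguments: ${errors.join(", ")}`);
  }
  return resolver(source, args, context, info);
};

// Hypothetical validator and resolver for a field with a `limit` argument.
const validateLimit = (args) => (args.limit > 100 ? ["limit must be <= 100"] : []);
const itemsResolver = (source, args) => `ok:${args.limit}`;

const guardedResolver = withArgValidation(validateLimit, itemsResolver);
```

In a schema transform, the wrapping step would be applied to every field tagged with the validation directive; here it is shown on a single bare resolver.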

@yaacovCR
Collaborator

happy to accept a PR to the docs with a new example, to whatever depth of complexity you think appropriate, with links to appropriate packages completing the workflow

we should consider in general linking to popular external schemaDirective classes and potentially any new external schemaTransform libraries

@alexstrat
Contributor

That being said, regarding graphql-field-arguments-coercion, I now realize that it relies on attaching a non-standard coerce property to type definitions via schema visiting. But patching types outside of the standard GraphQL type config is not possible with schema transformation.

Anyway, I was not a fan of the type-patching technique, and it's probably possible to do it differently, in a way compatible with schema transformation.

OK, noted for the doc! I might work on this when I adapt graphql-field-arguments-coercion to schema transformation.

@yaacovCR
Collaborator

Thanks @alexstrat

We are doing our best not to decorate GraphQL objects with custom fields, or at least to put them under the extensions property, as per upstream graphql-js

I believe #1468 should help preserve backwards compatibility to some extent

@yaacovCR yaacovCR added the waiting-for-release Fixed/resolved, and waiting for the next stable release label May 13, 2020
@bionicles
Author

bionicles commented May 13, 2020

I can dig through some old code and supply an example of what I was doing before: it was an @auth directive which allowed user to specify a list of functions as arguments.

The functions would take these arguments:

request
context (especially the user)
data target

and return true or false to accept or reject the request, or a string like "CRUD" naming the allowed operations (Create, Read, Update, Delete).

Basically, it lets you build a library of simple one-liner security rules.

It would be cool if we could quickly turn one-liners like this into security directives to decorate fields.

When a user makes a request, you run all the rules and concatenate their decisions, then check whether the request type is contained within the decisions. If an anonymous user requests to update an object which only devs can update, the request is denied.

const canAfford = ({user, target}) => user.balance >= target.price; 

const authorCRUD = ({user, target}) => user.id === target.authorId ? “CRUD” : “”;

const devCRUD = ({user}) => user.roles.includes(“dev”) ? “CRUD” : “”

const publicC = () => “C”

// then we can secure data like this:
email string @authorCRUD @publicC
const readACL = ({user,target}) => target.acl.includes(user.id) ? “R” : “”;
``` I
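The decision-concatenation scheme described above can be made runnable. The `authorize` helper and request shape are illustrative assumptions; the rules mirror the one-liners in the comment:

```javascript
// One-liner rules: each returns a string of allowed operations.
const authorCRUD = ({ user, target }) =>
  user.id === target.authorId ? "CRUD" : "";
const devCRUD = ({ user }) => (user.roles.includes("dev") ? "CRUD" : "");
const publicC = () => "C";

// Run every rule, concatenate the decisions, and allow the request only if
// its operation letter appears somewhere in the combined decision string.
const authorize = (rules, request) => {
  const decisions = rules.map((rule) => rule(request)).join("");
  return decisions.includes(request.op);
};

const rules = [authorCRUD, devCRUD, publicC];

const anonUpdate = {
  user: { id: "anon", roles: [] },
  target: { authorId: "a1" },
  op: "U",
};
const authorUpdate = {
  user: { id: "a1", roles: [] },
  target: { authorId: "a1" },
  op: "U",
};
```

With these rules, the anonymous update is denied (only "C" is granted), while the author's update succeeds because authorCRUD contributes "CRUD" to the decision string.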

@yaacovCR
Collaborator

Am happy to try to help with your particular use case, either on this issue or offline.

@bionicles
Author

bionicles commented May 15, 2020

Here's what I had before, for your reading pleasure. It's almost surely possible to optimize this, and particularly to remove the eval (although it only evaluates rules, not user-supplied code).

Consider it MIT license

// authorize.js
import {
  pipe,
  prop,
  juxt,
  map,
  split,
  flip,
  includes,
  useWith,
  __,
  path,
  anyPass,
  ifElse,
  trim
} from "ramda";
import { SchemaDirectiveVisitor } from "apollo-server-express";
import { defaultFieldResolver } from "graphql";

const getDotPath = r => useWith(path(__, r), [split(".")]);
const makeRuleFn = rule => {
  const [conditionStr, opStr] = map(trim, split("?", rule));
  const hasOp = flip(includes)(split("", opStr));
  const evalString = `; ${conditionStr}`;
  return r => {
    const p = getDotPath(r);
    return eval(evalString) && hasOp(r);
  };
};
const makeCheckRulesFn = ruleStrings => {
  const ruleFns = map(makeRuleFn, ruleStrings);
  return pipe(juxt(ruleFns), anyPass);
};
const reject = root => {
  throw Error(`unauthorized op: ${prop("op", root)}`);
};

// apollographql.com/docs/apollo-server/schema/creating-directives/
export class AuthDirective extends SchemaDirectiveVisitor {
  visitFieldDefinition(field) {
    const { resolve = defaultFieldResolver } = field;
    const { rules } = this.args;
    const checkRules = makeCheckRulesFn(rules);
    // apollographql.com/docs/graphql-tools/resolvers/#resolver-function-signature
    field.resolve = async (parent, args, context) => {
      const item = await context.redis.hgetall(args.id);
      const root = { parent, ...context, args, item };
      return ifElse(checkRules, resolve, reject)(root);
    };
  }
}
// type-defs.js
import { gql } from "apollo-server";

// define rule conditions
const AUTHOR = "p('item.author.id') === p('me.id')";
const DEV = "includes('devs', p('me.groups'))";
// define rules
const MAYBE_PUBLIC_READ = "!p('item.isPrivate') ? R";
// const AUTHOR_CRUD = `${AUTHOR} ? CRUD`;
const AUTHOR_READ = `${AUTHOR} ? R`;
// const DEV_CRUD = `${DEV} ? CRUD`;
const DEV_READ = `${DEV} ? R`;
// define rule sets
const FEEDBACK_RULES = `["1 ? C", "${DEV_READ}", "${AUTHOR_READ}", "${MAYBE_PUBLIC_READ}"]`;
const PERSON_RULES = `["1 ? C", "${DEV_READ}", "${AUTHOR_READ}"]`;
const DENY = `["0 ? CRUD"]`;
// define schema
export const typeDefs = gql`
  directive @auth(rules: [String]) on FIELD_DEFINITION
  type Person {
    id: ID! @auth(rules: ${DENY})
    name: String! @auth(rules: ${PERSON_RULES})
    email: String! @auth(rules: ${PERSON_RULES})
    groups: [String] @auth(rules: ${PERSON_RULES})
    signIns: [String] @auth(rules: ${PERSON_RULES})
    feedback: [Feedback] @auth(rules: ${PERSON_RULES})
  }
  type Feedback {
    id: ID! @auth(rules: ${DENY})
    author: Person @auth(rules: ${DENY}) 
    satisfied: Boolean! @auth(rules: ${FEEDBACK_RULES})
    score: Int! @auth(rules: ${FEEDBACK_RULES})
    feedback: String! @auth(rules: ${FEEDBACK_RULES})
    isPrivate: Boolean! @auth(rules: ${FEEDBACK_RULES})
  }
  type Mutation {
    createFeedback(
      authorId: ID
      satisfied: Boolean!
      score: Int!
      feedback: String!
      isPrivate: Boolean!
    ): Feedback
    me: Person
  }
  type Query {
    hello: String!,
    listFeedback: [Feedback]
  }
`;

@bionicles
Author

bionicles commented May 15, 2020

Now with this change we can avoid writing "@auth(rules:" 50 billion times.

"p" is just a hack to do functional dot-based pathing on nested objects.

"r" is the object with the user, target, and request type.
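A plain-JS sketch of that "p" dot-path helper, without the ramda machinery (illustrative only, not the exact code from the snippet above):

```javascript
// getDotPath(obj) returns a lookup function: "a.b.c" walks obj.a.b.c,
// returning undefined instead of throwing when any step is missing.
const getDotPath = (obj) => (dottedPath) =>
  dottedPath
    .split(".")
    .reduce((acc, key) => (acc == null ? undefined : acc[key]), obj);

// "r" is the per-request object with the user, target, etc.
const r = {
  item: { author: { id: "a1" }, isPrivate: false },
  me: { id: "a1" },
};
const p = getDotPath(r);
```

Rule conditions like `p('item.author.id') === p('me.id')` then read as one-liners over the request object.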

ardatan added a commit that referenced this issue May 21, 2020
* Move graphql-toolkit and switch to monorepo

* Update README

* Remove inspector

* Add yarn.lock back

* Add yarn.lock back

* Rename packages

* Update eslint config

* Separate mocking

* Update docs

* Update npm badge

* Make highlight images smaller

* Update yarn.lock

* Move casual to mocking's devDeps

* Remove link from warning

* Separate schema upload stuff

* Fix stuff

* Drop support for older versions

* Update eslint

* Add dateformat typings

* Use async spawn

* Promisify child spawn

* Use introspectSchema in url-loader

* Fix typings

* Fix renovate badge

* Improve release.js

* Separate generator and stitching

* Fix deps

* Fix deps

* Add ardatan/graphql-toolkit#578

* Remove deepmerge and use mergeDeep from utils

* Remove aggregate-error dependency

* refactor(stitchSchemas): add support of all makeExecutableSchema options (#1424)

* refactor(stitchSchemas): to cover all makeExecutableSchema options

* run prettier

* drop parse() and parseLiteral()

* unstaged!

Co-authored-by: Arda TANRIKULU <20847995+ardatan@users.noreply.github.com>

* Move specific types back to their own packages

* Split schema-wrapping and schema-stitching (#1429)

* second attempt

* commit

* fix?

* fix

* Improve deps

Co-authored-by: Arda TANRIKULU <ardatanrikulu@gmail.com>

* shorter names

* Update simple-git

* rename stitching-upload to links (#1432)

* Remove unnecessary prepack script and use new Node

* Do tests for multiple graphql versions

* Do not use fsExtra in match-graphql.js

* Use same yarn cache

* Ensure different node versions use different cache

* Use different cache for each graphql and node versions

* Use fs-extra

* remove unused type, was used with updateEachKey

* move SchemaLikeObject to stitching package

the type functions as a type alias for all the things that can be passed to stitchSchemas' 'schemas' parameter

* Split and refactor graphql import (#1437)

* Split & Refactor GraphQL Import

* Fix

* export toConfig functions not available in graphql-js (#1433)

* export toConfig functions not available in graphql-js

* avoid type.toConfig() workaround now that fieldToFieldConfig available

additional benefit of fieldToFieldConfig is that the rootFieldTransformer and fieldTransform arguments in TransformRootFields and TransformObjectFields expect functions that convert fields to field configs, so now end users can easily create their own functions

* Fix imports

* Fix package.json

* Fix integration tests

* remove orphaned type

* refactor resolver types and streamline related functions (#1441)

* refactor resolver types and streamline related functions

= adds back ability to set enum type properties using double underscore convention, was present in v4.
= also removes functional resolvers from stitchSchemas, not necessary as mergeInfo exposed on info
= removes info.mergeInfo.delegate and info.mergeInfo.delegateToSchema
= removed object filter for resolvers map, non-objects should not be present.

* move fragment and selectionSet hints to stitch types package using declaration merging

* add tests demonstrating fix for 1434

Use inheritResolversFromInterfaces and use the interface to set a selectionSet hint

* move package-specific types out of utils

* replace fragment hints with selectionSet hints

selectionSet hints are less verbose and allow extension from interfaces

* Fix build

* Separate test build and lint

* Improve github workflows

* Update workflow names

* Fix stitching from and to interfaces (#1443)

* ExpandAbstractTypes to check transformed subschema

An abstract type may be present in the target schema, but renamed. ExpandAbstractTypes expands the abstract types not present in the target schema, but it should check the transformed target schema, to not be misled by renaming.

This may require manually passing the transformed schema in some cases. The default can stay to assume no renaming.

Within stitched schemas, the correct transformed schema can be saved to and then read from info.mergeInfo.transformedSchema.

* move ICreateRequest to delegate package with ICreateRequestFromInfo

* Add WrapConcreteTypes transform

= fixes #751

* allow redelegation of nested root query fields

in some circumstances

nested root fields may not always successfully stitch to subschemas, for example when the root field is a new  abstract field stitching to a concrete field or the reverse

* provide transformedSchema argument in more instances

= changes CreateResolverFn signature to take an options argument of type ICreateResolverOptions

* add mapSchemaByField (#1447)

mapSchemaByField is a version of forEachField that does not mutate original schema

* Use field configs (#1451)

* remove spurious uses of fieldToFieldConfig

If type.toConfig() is later called returning config, fieldToFieldConfig is called for every field to populate config.fields, so there is no need to call it individually from the field map; might as well just move the toConfig call earlier.

* remove need for fieldToFieldConfig function

BREAKING CHANGE: transformer functions will be passed field config objects instead of field objects.

fieldToFieldConfig was necessary because TransformRootField et al functions expected users to pass a function that receives a field and returns a field config, modified as desired. This can be made easier for end users by simply passing the transformer function a field config instead of a field, obviating the need for fieldToFieldConfig.

* replace mapSchemaByField with more MapperKind options within mapSchema

* hoist schema transformation out of type/field loop

this should be performed only once!

* don't mutate previous schemas with HoistField and WrapFields transforms

fields helper functions now take a schema and return a new schema, rather than mutating the original schema's typemap.

part of broader push throughout the code base to avoid mutating schemas

Note: specifically within the context of transforms, we are currently protected against any side effects when mutating the passed in schema in any case. cloneSchema is used during the transformation loop -- the original schema cannot be mutated by a transformation, because a clone is passed rather than the original. Nevertheless, these changes are part of the overall effort to never rely on the ability to mutate an existing graphql schema.

* remove use of createNamedStub

and add comment to explain how this works

* bring back TypeMap

* update lock file

* use improved mapSchema to make schema package functions not modify existing schema (#1456)

* refactor WrapFields

= add enhanceSchema function that can add/replace types/directives, see detailed comment with regard to motivation even for simply adding a new type (vs just using toConfig)

= simplify field helper functions

more complex modifyFields function is not necessary; empty types can be added and later extended, so if multiple types with cross-referencing fields need to be added, the types can be added first without fields, and the fields appended later.

= appendFields currently adds fields to a new type if no existing type is found, this functionality perhaps should be made configurable or put in a separate function

* convert default values as leaf values; internal values may change when mapping

this is necessary to eventually replace use of healSchema and forEachDefaultValue within AddResolversToSchema, although use of mapSchema will require that function to be immutable

* use TypeMap type

* rename functions

= appendObjectFields and removeObjectFields better describe that these functions only apply to fields of GraphQLObjectType

= enhanceSchema is a bit vague

* fix mapSchema typo

All input types mapped to input object types!

* continue mutable => immutable conversion

This change:
= brings default value updating into mapSchema

= changes addResolversToSchema,
addCatchUndefinedToSchema, addErrorLoggingToSchema, and
addSchemaLevelResolver to all use mapSchema to return a new schema without modifying the passed-in schema

= change all internal use of these functions and all tests to use the return value

= pass typeName instead of a type to field mappers, as passing a type can be confusing, because the type may be in the process of being mapped and not equivalent to the type in the original schema

** the visitSchema function and the schema directive classes that use visitSchema are the next (final?) target for mutable => immutable conversion

* fix more docs

* prune unnecessary field mapper types

there are really only two types of fields: "composite fields" or "output fields" (called "fields" in graphql-js) on objects and interfaces, and input fields on input objects.

The generic FieldMapper type is useful for internal function mapFields so that field mapping logic does not have to be duplicated between output and input fields because of the different types.

* Update contact link

* Schema transforms (#1463)

* rewrite attachDirectiveResolvers to be immutable

* add mapping by arguments

* add schema transforms

schema transform functions take a schema as an argument and return a schema, possibly modifying it

a schema transform function can be templated, i.e. one can create a function that takes certain arguments, such as the names of directives that annotate fields of interest, and return a schema transform function that modifies the schema based on the specified annotating directives

this allows one to reuse a schema transform function across projects, allowing for the customization of the schema transform function on a per use basis

schema transform functions can be called directly on a schema to modify it before use -- makeExecutableSchema can now also be passed an array of schema transform functions, with makeExecutableSchema responsible for performing all schema transformations prior to returning.

Eventually, all schema visitor examples can be rewritten using the schema transform functional API

addresses #1234

* add generic field mapper to SchemaMapper to allow indiscriminate mapping of all fields and input fields

* add enum directive extraction and value mapping

* add tests demonstrating templated schema transforms

* update docs for schemaTransforms

include deprecated example in tests

reorder some types

* Fix absolute imports (#1470)

* restore v4 healSchema functionality (#1468)

Within v4, schema healing was less aggressive and did not attempt to refresh the schema's private variables or correct the root types.

Reverting to the less aggressive healSchema makes some sense.
Library users require healSchema because they are modifying the schema in place using visitSchema or visitSchemaDirectives. This lets them do some things that perhaps they shouldn't (e.g. create fields of different types with identical names), but it is unexpected for the more aggressive healing to fail where the less aggressive healing works fine.

See #1462

This requires a rewrite of the existing Gatsby transforms, which rely on the more aggressive v5 healSchema, but that rewrite is recommended anyway, as healSchema/visitSchema/visitSchemaDirectives are all now considered legacy.

Instructions for do-it-yourself more aggressive healing are included as comments within healSchema, detailing the deficiencies of the less aggressive approach, rather than releasing the more aggressive healing under an option or a new function name. This is because library users are now encouraged not to modify existing schemas in place using visitSchema/visitSchemaDirectives, and to use mapSchema instead.

* fix latest test (#1471)

compare original error as comparing GraphQL errors is trickier

* use static types (#1472)

* 6.0.0-alpha.1

* fix removeObjectFields helper (#1480)

it was selecting, not helping!

it turns out that removing all object fields leads to automatic type pruning -- this could be made optional, but it is easier to add a selectObjectFields helper and a modifyObjectFields helper that can remove and add fields in a single step
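The select/modify split can be illustrated over a plain field map. This is a hypothetical simplification, not the actual graphql-tools helper signatures: the point is that modifying removes and adds fields in one step, so the type never transiently becomes empty (which would trigger pruning).

```typescript
// Simplified stand-ins for field configs and the filter callback.
interface FieldLike {
  type: string;
}
type FieldFilter = (fieldName: string, field: FieldLike) => boolean;

// Select: partition fields into those matching the filter and the rest.
function selectFields(
  fields: Record<string, FieldLike>,
  filter: FieldFilter,
): [Record<string, FieldLike>, Record<string, FieldLike>] {
  const selected: Record<string, FieldLike> = {};
  const remaining: Record<string, FieldLike> = {};
  for (const [name, field] of Object.entries(fields)) {
    (filter(name, field) ? selected : remaining)[name] = field;
  }
  return [selected, remaining];
}

// Modify: remove the matching fields and add replacements in a single step,
// so the field map is never empty between the two operations.
function modifyFields(
  fields: Record<string, FieldLike>,
  filter: FieldFilter,
  newFields: Record<string, FieldLike>,
): Record<string, FieldLike> {
  const [, remaining] = selectFields(fields, filter);
  return { ...remaining, ...newFields };
}

const result = modifyFields(
  { oldId: { type: 'ID' }, name: { type: 'String' } },
  (fieldName) => fieldName === 'oldId',
  { id: { type: 'ID' } },
);
console.log(Object.keys(result)); // [ 'name', 'id' ]
```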

* Fix bare request delegation (#1479)

* add failing test

also moves delegation test to delegate package

* fix bare delegation

* drop requirement for info with bare delegation

* add explicit fieldNodes bare delegation test

* further improve bare delegation (#1482)

handle object fields
handle some errors

* simplify handleNull (#1483)

after prior refactoring, this function was recursively passing along arguments without ever actually using them

* remove dead argument to unwrapResult (#1484)

* fixes

* Some improvements on deps

* Fix not printing some directive definitions ardatan/graphql-toolkit#611

* Stitching info (#1490)

* remove dead type

* rename mergeInfo to stitchingInfo

stash on extensions instead of annotating info argument

closes #1486

* make generic transforms generic (#1495)

closes #1485

* move StitchingInfo type to both delegate and stitch

in theory, the packages mutually depend on each other, but really they just both depend on the StitchingInfo type

the fragments field on StitchingInfo was deprecated when we switched to precompiling fragments/selectionSets i.e. when we switched from using ReplaceFieldWithFragment to AddReplacementFragments/AddReplacementSelectionSets

* prepackage mergedTypesSelectionSets into its own map on stitchingInfo

= removes need for further ".selectionSet" indexing within AddMergedTypeSelectionSets

= allows AddMergedTypeSelectionSets to be more generic, can accept a map of type names and any selection set

* Rename transforms to generic names

because they are actually more generic than they appear.

AddReplacementFragments => AddFragmentsByFields
AddReplacementSelectionSets => AddSelectionSetsByField
AddMergedTypeSelectionSets => AddSelectionSetsByType

ReplaceFieldWithFragment is for raw string fragments; it retains the old name even though it doesn't replace anything, just to smooth backwards compatibility

* Update match-graphql.js

* Update typescript-eslint monorepo to v2.34.0 (#1502)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Trigger CI for this branch

* Fix workflow

* avoid infinite loop (#1505)

fixSchemaAst fixes the schema ast by adding a getter for the property which lazily calls buildSchema(printSchemaWithDirectives(schema)) to rebuild the schema.

printSchemaWithDirectives, however, now checks whether schema.astNode is defined, and so will end up calling itself. This can be avoided by making sure that astNode is not defined as a getter.

fixSchemaAst could alternatively be changed to return a new schema rather than modifying the original schema.
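The hazard can be sketched with a memoized lazy getter plus a re-entrancy guard, so the loop fails fast instead of recursing forever. The types and `defineLazyAstNode` name here are simplified stand-ins, not the actual fixSchemaAst code.

```typescript
// Simplified stand-in for a GraphQLSchema carrying an optional ast node.
interface SchemaLike {
  astNode?: object;
}

function defineLazyAstNode(
  schema: SchemaLike,
  rebuild: (s: SchemaLike) => object,
): void {
  let cached: object | undefined;
  let rebuilding = false;
  Object.defineProperty(schema, 'astNode', {
    configurable: true,
    get() {
      if (cached === undefined) {
        if (rebuilding) {
          // If rebuild() itself reads schema.astNode, we re-enter this
          // getter -- exactly the infinite loop described above.
          throw new Error('astNode getter re-entered during rebuild');
        }
        rebuilding = true;
        cached = rebuild(schema); // computed once, on first access
        rebuilding = false;
      }
      return cached;
    },
  });
}

const schema: SchemaLike = {};
let rebuilds = 0;
defineLazyAstNode(schema, () => {
  rebuilds++;
  return { kind: 'SchemaDefinition' };
});
const first = schema.astNode;  // first access triggers the rebuild
const second = schema.astNode; // second access hits the cache
console.log(rebuilds, first === second); // 1 true
```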

* Update dependency simple-git to v2.5.0 (#1500)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Pin dependency graphql-upload to 10.0.0 (#1491)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/jest to v25.2.3 (#1492)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Fix ts error

* Update dependency typescript to v3.9.2 (#1501)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency lint-staged to v10.2.4 (#1498)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency apollo-server-express to v2.13.1 (#1497)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/node to v13.13.6 (#1494)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/lodash to v4.14.151 (#1493)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/supertest to v2.0.9 (#1496)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* v6.0.0-alpha.2

* guard against undefined directives (#1506)

in --some-- codebase locations!

closes #1486

* Update dependency @docusaurus/core to v2.0.0-alpha.55 (#1507)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/fs-extra to v9.0.1 (#1510)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/node to v13.13.7 (#1511)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/lodash to v4.14.152 (#1512)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency @types/node to v13.13.8 (#1514)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* Update dependency typescript to v3.9.3 (#1515)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

* support regenerating schema, type and directive object annotations (#1517)

within the graphql ecosystem, many users still annotate schemas, types, and
possibly directives with custom properties instead of using the
newer extensions property.

mapSchema should copy all custom properties where possible when creating graphql objects.
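The annotation-copying idea can be sketched as follows. The `copyCustomAnnotations` helper and the `knownKeys` set are hypothetical illustrations: when rebuilding an object from its config, carry over any own properties the constructor does not know about, in addition to the extensions field.

```typescript
// Copy legacy custom annotations (own properties the rebuild would
// otherwise drop) from the old object onto its freshly built replacement.
function copyCustomAnnotations<T extends Record<string, unknown>>(
  oldObject: T,
  newObject: T,
  knownKeys: ReadonlySet<string>,
): T {
  for (const key of Object.keys(oldObject)) {
    if (!knownKeys.has(key)) {
      // a custom property not covered by the standard config
      newObject[key as keyof T] = oldObject[key as keyof T];
    }
  }
  return newObject;
}

// Hypothetical example: '_authRole' is a legacy annotation set directly on
// the type object rather than stored under 'extensions'.
const known = new Set(['name', 'fields', 'extensions']);
const oldType: Record<string, unknown> = {
  name: 'User',
  fields: {},
  extensions: {},
  _authRole: 'admin',
};
const newType = copyCustomAnnotations(
  oldType,
  { name: 'User', fields: {}, extensions: {} },
  known,
);
console.log(newType._authRole); // 'admin'
```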

* provide option to update schema in place (#1518)

for full backwards compatibility

reverts changes with respect to type annotation -- if types have already been modified in place, it is more straightforward to just keep doing so

* Use Bob v1.0.0 (#1519)

* Add graphql-tools that reexports all other packages

* Fix naming collision

* Update dependency bob-the-bundler to v1.0.1 (#1520)

Co-authored-by: Renovate Bot <bot@renovateapp.com>

Co-authored-by: Yaacov Rydzinski <yaacovCR@gmail.com>
Co-authored-by: Kamil Kisiela <kamil.kisiela@gmail.com>
Co-authored-by: Dotan Simha <dotansimha@gmail.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Renovate Bot <bot@renovateapp.com>
@yaacovCR
Collaborator

Closed by #1419.

@yaacovCR yaacovCR removed the waiting-for-release Fixed/resolved, and waiting for the next stable release label May 21, 2020