Ajv: Another JSON Schema Validator
The fastest JSON Schema validator for node.js and browser with draft 6 support.
Using version 5
JSON Schema draft-06 is published.
Ajv version 5.0.0, which supports draft-06, is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas).
Please note: To use Ajv with draft-04 schemas you need to explicitly add meta-schema to the validator instance:
ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json'));
Contents
- Performance
- Features
- Getting started
- Frequently Asked Questions
- Using in browser
- Command line interface
- Validation
- Modifying data during validation
- API
- Related packages
- Packages using Ajv
- Tests, Contributing, History, License
Performance
Ajv generates code using doT templates to turn JSON schemas into super-fast validation functions that are efficient for v8 optimization.
Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks:
- json-schema-benchmark - 50% faster than the second place
- jsck benchmark - 20-190% faster
- z-schema benchmark
- themis benchmark
Performance of different validators by json-schema-benchmark:
Features
- Ajv implements full JSON Schema draft 6 and draft 4 standards:
- all validation keywords (see JSON Schema validation keywords)
- full support of remote refs (remote schemas have to be added with addSchema or compiled to be available)
- support of circular references between schemas
- correct string lengths for strings with unicode pairs (can be turned off)
- formats defined by JSON Schema draft 4 standard and custom formats (can be turned off)
- validates schemas against meta-schema
- supports browsers and nodejs 0.10-7.x
- asynchronous loading of referenced schemas during compilation
- "All errors" validation mode with option allErrors
- error messages with parameters describing error reasons to allow creating custom error messages
- i18n error messages support with ajv-i18n package
- filtering data from additional properties
- assigning defaults to missing properties and items
- coercing data to the types specified in type keywords
- custom keywords
- draft-6 keywords const, contains and propertyNames
- draft-6 boolean schemas (true/false as a schema to always pass/fail)
- keywords switch, patternRequired, formatMaximum/formatMinimum and formatExclusiveMaximum/formatExclusiveMinimum from JSON-schema extension proposals with ajv-keywords package
- $data reference to use values from the validated data as values for the schema keywords
- asynchronous validation of custom formats and keywords
Currently Ajv is the only validator that passes all the tests from the JSON Schema Test Suite (according to json-schema-benchmark, apart from the test requiring that 1.0 is not an integer, which is impossible to satisfy in JavaScript).
Install
npm install ajv
Getting started
Try it in the node REPL: https://tonicdev.com/npm/ajv
The fastest validation call:
var Ajv = require('ajv');
var ajv = new Ajv(); // options can be passed, e.g. {allErrors: true}
var validate = ajv.compile(schema);
var valid = validate(data);
if (!valid) console.log(validate.errors);
or with less code
// ...
var valid = ajv.validate(schema, data);
if (!valid) console.log(ajv.errors);
// ...
or
// ...
ajv.addSchema(schema, 'mySchema');
var valid = ajv.validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
See API and Options for more details.
Ajv compiles schemas to functions and caches them in all cases (using schema serialized with json-stable-stringify or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.
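The caching idea can be sketched in a few lines of plain JavaScript (a simplified illustration, not Ajv's implementation - Ajv uses the json-stable-stringify package): serializing with sorted keys makes two structurally identical schema objects map to the same cache key.

```javascript
// Simplified sketch of a stable cache key: objects with the same
// structure serialize identically regardless of property order.
// Not Ajv's actual code - it uses the json-stable-stringify package.
function stableStringify(value) {
  if (value === null || typeof value !== 'object') return JSON.stringify(value);
  if (Array.isArray(value)) return '[' + value.map(stableStringify).join(',') + ']';
  return '{' + Object.keys(value).sort().map(function (k) {
    return JSON.stringify(k) + ':' + stableStringify(value[k]);
  }).join(',') + '}';
}

var cache = {};
function compileCached(schema, compileFn) {
  var key = stableStringify(schema);
  if (!cache[key]) cache[key] = compileFn(schema); // compile only on cache miss
  return cache[key];
}
```

Because the key depends only on the schema's structure, passing an equal but distinct schema object does not trigger a recompile.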
The best performance is achieved when using compiled functions returned by the compile or getSchema methods (there is no additional function call).
Please note: every time the validation function or ajv.validate is called, the errors property is overwritten. You need to copy the errors array reference to another variable if you want to use it later (e.g., in a callback). See Validation errors.
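The overwriting behaviour can be illustrated with a toy validator (hypothetical code, not Ajv's) whose errors property is replaced on every call, just like Ajv's validating functions:

```javascript
// Toy sketch of why the errors array must be copied: each call
// replaces the `errors` property, as Ajv's validating functions do.
// Hypothetical example, not Ajv code.
function makeValidator() {
  function validate(data) {
    validate.errors = typeof data === 'number' ? null : [{ message: 'should be number' }];
    return validate.errors === null;
  }
  return validate;
}

var validate = makeValidator();
validate('oops');
var savedErrors = validate.errors; // copy the reference before validating again
validate(42);                      // this call overwrites validate.errors
```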
Using in browser
You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.
If you need to use Ajv in several bundles you can create a separate UMD bundle using the npm run bundle script (thanks to siddo420).
Then you need to load Ajv in the browser:
<script src="ajv.min.js"></script>
This bundle can be used with different module systems; it creates a global Ajv if no module system is found.
The browser bundle is available on cdnjs.
Ajv is tested with these browsers:
Please note: some frameworks, e.g. Dojo, may redefine global require in a way that is not compatible with the CommonJS module format. In this case the Ajv bundle has to be loaded before the framework, after which you can use global Ajv (see issue #234).
Command line interface
CLI is available as a separate npm package ajv-cli. It supports:
- compiling JSON-schemas to test their validity
- BETA: generating standalone module exporting a validation function to be used without Ajv (using ajv-pack)
- migrating schemas to draft-06 (using json-schema-migrate)
- validating data file(s) against JSON-schema
- testing expected validity of data against JSON-schema
- referenced schemas
- custom meta-schemas
- files in JSON and JavaScript format
- all Ajv options
- reporting changes in data after validation in JSON-patch format
Validation keywords
Ajv supports all validation keywords from drafts 4 and 6 of the JSON-schema standard:
- type
- for numbers - maximum, minimum, exclusiveMaximum, exclusiveMinimum, multipleOf
- for strings - maxLength, minLength, pattern, format
- for arrays - maxItems, minItems, uniqueItems, items, additionalItems, contains
- for objects - maxProperties, minProperties, required, properties, patternProperties, additionalProperties, dependencies, propertyNames
- for all types - enum, const
- compound keywords - not, oneOf, anyOf, allOf
With ajv-keywords package Ajv also supports validation keywords from JSON Schema extension proposals for JSON-schema standard:
- switch - conditional validation with a sequence of if/then clauses
- patternRequired - like required but with patterns that some property should match
- formatMaximum, formatMinimum, formatExclusiveMaximum, formatExclusiveMinimum - setting limits for date, time, etc.
See JSON Schema validation keywords for more details.
Formats
The following formats are supported for string validation with "format" keyword:
- date: full-date according to RFC3339.
- time: time with optional time-zone.
- date-time: date-time from the same source (time-zone is mandatory).
date, time and date-time validate ranges in full mode and only regexp in fast mode (see options).
- uri: full uri with optional protocol.
- url: URL record.
- uri-template: URI template according to RFC6570
- email: email address.
- hostname: host name according to RFC1034.
- ipv4: IP address v4.
- ipv6: IP address v6.
- regex: tests whether a string is a valid regular expression by passing it to RegExp constructor.
- uuid: Universally Unique IDentifier according to RFC4122.
- json-pointer: JSON-pointer according to RFC6901.
- relative-json-pointer: relative JSON-pointer according to this draft.
There are two modes of format validation: fast and full. This mode affects formats date, time, date-time, uri, email, and hostname. See Options for details.
You can add additional formats and replace any of the formats above using addFormat method.
The option unknownFormats allows changing the behaviour when an unknown format is encountered - Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You can also whitelist specific format(s) to be ignored. See Options for details.
You can find patterns used for format validation and the sources that were used in formats.js.
$data reference
With the $data option you can use values from the validated data as the values for the schema keywords. See the proposal for more information about how it works.
$data reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum/exclusiveMinimum, maxLength/minLength, maxItems/minItems, maxProperties/minProperties, formatMaximum/formatMinimum, formatExclusiveMaximum/formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems.
The value of "$data" should be a JSON-pointer to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a relative JSON-pointer (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema).
Examples.
This schema requires that the value in the property smaller is less than or equal to the value in the property larger:
var schema = {
"properties": {
"smaller": {
"type": "number",
"maximum": { "$data": "1/larger" }
},
"larger": { "type": "number" }
}
};
var validData = {
smaller: 5,
larger: 7
};
This schema requires that the properties have the same format as their field names:
var schema = {
"additionalProperties": {
"type": "string",
"format": { "$data": "0#" }
}
};
var validData = {
'date-time': '1963-06-19T08:30:06.283185Z',
email: 'joe.bloggs@example.com'
}
$data reference is resolved safely - it won't throw even if some property is undefined. If $data resolves to undefined the validation succeeds (with the exception of the const keyword). If $data resolves to an incorrect type (e.g. not "number" for the maximum keyword) the validation fails.
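As an illustration of the pointer rules above, here is a minimal resolver for absolute and relative $data pointers (a sketch only - resolveData and its path argument are hypothetical names, not Ajv's API):

```javascript
// Minimal sketch of $data pointer resolution (not Ajv's implementation).
// `path` is the stack of keys from the root to the current data location.
function resolveData(root, path, pointer) {
  var m = pointer.match(/^(\d+)(#?)(\/.*)?$/); // relative pointer, e.g. "1/larger" or "0#"
  var segments;
  if (m) {
    var up = parseInt(m[1], 10);               // how many levels to go up
    if (up > path.length) return undefined;
    var basePath = path.slice(0, path.length - up);
    if (m[2] === '#') return basePath[basePath.length - 1]; // "N#" resolves to the key name
    segments = basePath.concat(m[3] ? m[3].split('/').slice(1) : []);
  } else {
    segments = pointer === '' ? [] : pointer.split('/').slice(1); // absolute pointer from root
  }
  return segments.reduce(function (node, seg) {
    return node === undefined ? undefined : node[seg];
  }, root);
}
```

With the first example above, "1/larger" evaluated at the smaller property goes up one level and reads larger; with the second, "0#" evaluated at a property returns that property's own name.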
$merge and $patch keywords
With the package ajv-merge-patch you can use the keywords $merge and $patch that allow extending JSON-schemas with patches using the formats JSON Merge Patch (RFC 7396) and JSON Patch (RFC 6902).
To add the keywords $merge and $patch to the Ajv instance use this code:
require('ajv-merge-patch')(ajv);
Examples.
Using $merge:
{
"$merge": {
"source": {
"type": "object",
"properties": { "p": { "type": "string" } },
"additionalProperties": false
},
"with": {
"properties": { "q": { "type": "number" } }
}
}
}
Using $patch:
{
"$patch": {
"source": {
"type": "object",
"properties": { "p": { "type": "string" } },
"additionalProperties": false
},
"with": [
{ "op": "add", "path": "/properties/q", "value": { "type": "number" } }
]
}
}
The schemas above are equivalent to this schema:
{
"type": "object",
"properties": {
"p": { "type": "string" },
"q": { "type": "number" }
},
"additionalProperties": false
}
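The $merge keyword follows RFC 7396 semantics: members of "with" are deep-merged into "source", and a null value deletes a member. A minimal sketch of that merge algorithm (not the ajv-merge-patch implementation):

```javascript
// Minimal RFC 7396 JSON Merge Patch sketch (not ajv-merge-patch's code).
function mergePatch(target, patch) {
  if (patch === null || typeof patch !== 'object' || Array.isArray(patch)) {
    return patch; // non-object patches replace the target entirely
  }
  var result = (target !== null && typeof target === 'object' && !Array.isArray(target))
    ? Object.keys(target).reduce(function (acc, k) { acc[k] = target[k]; return acc; }, {})
    : {};
  Object.keys(patch).forEach(function (k) {
    if (patch[k] === null) delete result[k];          // null removes the member
    else result[k] = mergePatch(result[k], patch[k]); // recurse into objects
  });
  return result;
}

var source = {
  type: 'object',
  properties: { p: { type: 'string' } },
  additionalProperties: false
};
var merged = mergePatch(source, { properties: { q: { type: 'number' } } });
// merged now describes both the "p" and "q" properties
```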
The properties source and with in the keywords $merge and $patch can use absolute or relative $ref to point to other schemas previously added to the Ajv instance or to fragments of the current schema.
See the package ajv-merge-patch for more information.
Defining custom keywords
The advantages of using custom keywords are:
- allow creating validation scenarios that cannot be expressed using JSON Schema
- simplify your schemas
- help bringing a bigger part of the validation logic to your schemas
- make your schemas more expressive, less verbose and closer to your application domain
- implement custom data processors that modify your data (modifying option MUST be used in keyword definition) and/or create side effects while the data is being validated
If a keyword is used only for side-effects and its validation result is pre-defined, use option valid: true/false in the keyword definition to simplify both the generated code (no error handling in case of valid: true) and your keyword functions (no need to return any validation result).
When extending the JSON-schema standard with custom keywords, be aware of the portability and comprehensibility of your schemas: you will have to support these custom keywords on other platforms and properly document them so that everybody can understand them in your schemas.
You can define custom keywords with the addKeyword method. Keywords are defined on the ajv instance level - new instances will not have previously defined keywords.
Ajv allows defining keywords with:
- validation function
- compilation function
- macro function
- inline compilation function that should return code (as string) that will be inlined in the currently compiled schema.
Example: range and exclusiveRange keywords using a compiled schema:
ajv.addKeyword('range', {
  type: 'number',
  compile: function (sch, parentSchema) {
    var min = sch[0];
    var max = sch[1];
    return parentSchema.exclusiveRange === true
      ? function (data) { return data > min && data < max; }
      : function (data) { return data >= min && data <= max; };
  }
});
var schema = { "range": [2, 4], "exclusiveRange": true };
var validate = ajv.compile(schema);
console.log(validate(2.01)); // true
console.log(validate(3.99)); // true
console.log(validate(2)); // false
console.log(validate(4)); // false
Several custom keywords (typeof, instanceof, range and propertyNames) are defined in ajv-keywords package - they can be used for your schemas and as a starting point for your own custom keywords.
See Defining custom keywords for more details.
Asynchronous schema compilation
During asynchronous compilation remote references are loaded using the supplied function. See the compileAsync method and loadSchema option.
Example:
var ajv = new Ajv({ loadSchema: loadSchema });
ajv.compileAsync(schema).then(function (validate) {
var valid = validate(data);
// ...
});
function loadSchema(uri) {
return request.json(uri).then(function (res) {
if (res.statusCode >= 400)
throw new Error('Loading error: ' + res.statusCode);
return res.body;
});
}
Please note: the option missingRefs should NOT be set to "ignore" or "fail" for asynchronous compilation to work.
Asynchronous validation
Example in node REPL: https://tonicdev.com/esp/ajv-asynchronous-validation
You can define custom formats and keywords that perform validation asynchronously by accessing a database or some service. You should add async: true in the keyword or format definition (see addFormat, addKeyword and Defining custom keywords).
If your schema uses asynchronous formats/keywords or refers to some schema that contains them, it should have the "$async": true keyword so that Ajv can compile it correctly. If an asynchronous format/keyword or a reference to an asynchronous schema is used in a schema without the $async keyword, Ajv will throw an exception during schema compilation.
Please note: all asynchronous subschemas that are referenced from the current or other schemas should have the "$async": true keyword as well, otherwise the schema compilation will fail.
The validation function for an asynchronous custom format/keyword should return a promise that resolves with true or false (or rejects with new Ajv.ValidationError(errors) if you want to return custom errors from the keyword function). Ajv compiles asynchronous schemas either to es7 async functions, which can optionally be transpiled with nodent or with regenerator, or to generator functions, which can optionally be transpiled with regenerator as well. You can also supply any other transpiler as a function. See Options.
The compiled validation function has a $async: true property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas.
If you are using generators, the compiled validation function can either be wrapped with co (default) or returned as a generator function that can be used directly, e.g. in koa 1.0. co is a small library; it is included in Ajv (both as an npm dependency and in the browser bundle).
Async functions are currently supported in Chrome 55, Firefox 52, Node 7 (with --harmony-async-await) and MS Edge 13 (with flag).
Generator functions are currently supported in Chrome, Firefox and node.js.
If you are using Ajv in other browsers or in older versions of node.js you should use one of available transpiling options. All provided async modes use global Promise class. If your platform does not have Promise you should use a polyfill that defines it.
The validation result will be a promise that resolves with the validated data or rejects with an exception Ajv.ValidationError that has the array of validation errors in its errors property.
Example:
/**
* Default mode is non-transpiled generator function wrapped with `co`.
* Using package ajv-async (https://github.com/epoberezkin/ajv-async)
* you can auto-detect the best async mode.
* In this case, without "async" and "transpile" options
* (or with option {async: true})
* Ajv will choose the first supported/installed option in this order:
* 1. native async function
* 2. native generator function wrapped with co
* 3. es7 async functions transpiled with nodent
* 4. es7 async functions transpiled with regenerator
*/
var setupAsync = require('ajv-async');
var ajv = setupAsync(new Ajv);
ajv.addKeyword('idExists', {
async: true,
type: 'number',
validate: checkIdExists
});
function checkIdExists(schema, data) {
return knex(schema.table)
.select('id')
.where('id', data)
.then(function (rows) {
return !!rows.length; // true if record is found
});
}
var schema = {
"$async": true,
"properties": {
"userId": {
"type": "integer",
"idExists": { "table": "users" }
},
"postId": {
"type": "integer",
"idExists": { "table": "posts" }
}
}
};
var validate = ajv.compile(schema);
validate({ userId: 1, postId: 19 })
.then(function (data) {
console.log('Data is valid', data); // { userId: 1, postId: 19 }
})
.catch(function (err) {
if (!(err instanceof Ajv.ValidationError)) throw err;
// data is invalid
console.log('Validation errors:', err.errors);
});
Using transpilers with asynchronous validation functions.
To use a transpiler you should separately install it (or load its bundle in the browser).
Ajv npm package includes minified browser bundles of regenerator and nodent in dist folder.
Using nodent
var setupAsync = require('ajv-async');
var ajv = new Ajv({ /* async: 'es7', */ transpile: 'nodent' });
setupAsync(ajv);
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
npm install nodent or use nodent.min.js from the dist folder of the npm package.
Using regenerator
var setupAsync = require('ajv-async');
var ajv = new Ajv({ /* async: 'es7', */ transpile: 'regenerator' });
setupAsync(ajv);
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
npm install regenerator or use regenerator.min.js from the dist folder of the npm package.
Using other transpilers
var ajv = new Ajv({ async: 'es7', processCode: transpileFunc });
var validate = ajv.compile(schema); // transpiled es7 async function
validate(data).then(successFunc).catch(errorFunc);
See Options.
Comparison of async modes
mode | transpile speed* | run-time speed* | bundle size
---|---|---|---
es7 async (native) | - | 0.75 | -
generators (native) | - | 1.0 | -
es7.nodent | 1.35 | 1.1 | 215Kb
es7.regenerator | 1.0 | 2.7 | 1109Kb
regenerator | 1.0 | 3.2 | 1109Kb
* Relative performance in node v.7, smaller is better.
nodent has several advantages:
- much smaller browser bundle than regenerator
- almost the same performance of generated code as native generators in nodejs and the latest Chrome
- much better performance than native generators in other browsers
- works in IE 9 (regenerator does not)
Filtering data
With the option removeAdditional (added by andyscott) you can filter data during validation.
This option modifies original data.
Example:
var ajv = new Ajv({ removeAdditional: true });
var schema = {
"additionalProperties": false,
"properties": {
"foo": { "type": "number" },
"bar": {
"additionalProperties": { "type": "number" },
"properties": {
"baz": { "type": "string" }
}
}
}
}
var data = {
"foo": 0,
"additional1": 1, // will be removed; `additionalProperties` == false
"bar": {
"baz": "abc",
"additional2": 2 // will NOT be removed; `additionalProperties` != false
},
}
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 0, "bar": { "baz": "abc", "additional2": 2 } }
If the removeAdditional option in the example above were "all", then both additional1 and additional2 properties would have been removed.
If the option were "failing", then the property additional1 would have been removed regardless of its value and the property additional2 would have been removed only if its value failed the schema in the inner additionalProperties (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed).
Please note: if you use the removeAdditional option with the additionalProperties keyword inside anyOf/oneOf keywords your validation can fail, for example with this schema:
{
"type": "object",
"oneOf": [
{
"properties": {
"foo": { "type": "string" }
},
"required": [ "foo" ],
"additionalProperties": false
},
{
"properties": {
"bar": { "type": "integer" }
},
"required": [ "bar" ],
"additionalProperties": false
}
]
}
The intention of the schema above is to allow objects with either the string property "foo" or the integer property "bar", but not with both and not with any other properties.
With the option removeAdditional: true the validation will pass for the object { "foo": "abc" } but will fail for the object { "bar": 1 }. This happens because while the first subschema in oneOf is validated, the property bar is removed - it is an additional property according to the standard, because it is not included in the properties keyword of the same schema.
While this behaviour is unexpected (issues #129, #134), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way:
{
"type": "object",
"properties": {
"foo": { "type": "string" },
"bar": { "type": "integer" }
},
"additionalProperties": false,
"oneOf": [
{ "required": [ "foo" ] },
{ "required": [ "bar" ] }
]
}
The schema above is also more efficient - it will compile into a faster function.
Assigning defaults
With the option useDefaults Ajv will assign values from the default keyword in the schemas of properties and items (when it is an array of schemas) to missing properties and items.
This option modifies original data.
Please note: by default the default value is inserted in the generated validation code as a literal (starting from v4.0), so the value inserted in the data will be the deep clone of the default in the schema.
If you need to insert the default value in the data by reference, pass the option useDefaults: "shared".
Inserting defaults by reference can be faster (in case you have an object in default) and it allows having dynamic values in defaults, e.g. a timestamp, without recompiling the schema. The side effect is that modifying the default value in any validated data instance will change the default in the schema and in other validated data instances. See example 3 below.
Example 1 (default in properties):
var ajv = new Ajv({ useDefaults: true });
var schema = {
"type": "object",
"properties": {
"foo": { "type": "number" },
"bar": { "type": "string", "default": "baz" }
},
"required": [ "foo", "bar" ]
};
var data = { "foo": 1 };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": "baz" }
Example 2 (default in items):
var schema = {
"type": "array",
"items": [
{ "type": "number" },
{ "type": "string", "default": "foo" }
]
}
var data = [ 1 ];
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // [ 1, "foo" ]
Example 3 (inserting "defaults" by reference):
var ajv = new Ajv({ useDefaults: 'shared' });
var schema = {
properties: {
foo: {
default: { bar: 1 }
}
}
}
var validate = ajv.compile(schema);
var data = {};
console.log(validate(data)); // true
console.log(data); // { foo: { bar: 1 } }
data.foo.bar = 2;
var data2 = {};
console.log(validate(data2)); // true
console.log(data2); // { foo: { bar: 2 } }
default keywords in other cases are ignored:
- not in properties or items subschemas
- in schemas inside anyOf, oneOf and not (see #42)
- in the if subschema of the switch keyword
- in schemas generated by custom macro keywords
Coercing data types
When you are validating user input, all your data properties are usually strings. The option coerceTypes allows you to have your data types coerced to the types specified in your schema type keywords, both to pass the validation and to use the correctly typed data afterwards.
This option modifies original data.
Please note: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won't be updated because scalars are passed by value.
Example 1:
var ajv = new Ajv({ coerceTypes: true });
var schema = {
"type": "object",
"properties": {
"foo": { "type": "number" },
"bar": { "type": "boolean" }
},
"required": [ "foo", "bar" ]
};
var data = { "foo": "1", "bar": "false" };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": 1, "bar": false }
Example 2 (array coercions):
var ajv = new Ajv({ coerceTypes: 'array' });
var schema = {
"properties": {
"foo": { "type": "array", "items": { "type": "number" } },
"bar": { "type": "boolean" }
}
};
var data = { "foo": "1", "bar": ["false"] };
var validate = ajv.compile(schema);
console.log(validate(data)); // true
console.log(data); // { "foo": [1], "bar": false }
The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of "anyOf" and other compound keywords).
See Coercion rules for details.
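A few of these rules can be sketched in plain JavaScript (a simplified illustration of some assumed rules - see the Coercion rules table for the authoritative set): numeric strings coerce to numbers, "true"/"false" to booleans, while arbitrary strings fail instead of becoming NaN.

```javascript
// Simplified sketch of a few coercion rules (see the Coercion rules
// table for the full set) - not Ajv's generated code.
function coerce(value, type) {
  if (type === 'number' && typeof value === 'string' && value !== '' && !isNaN(value)) {
    return { ok: true, value: Number(value) }; // "1" -> 1, but "abc" fails below
  }
  if (type === 'boolean') {
    if (value === 'true') return { ok: true, value: true };
    if (value === 'false') return { ok: true, value: false };
  }
  if (type === 'string' && (typeof value === 'number' || typeof value === 'boolean')) {
    return { ok: true, value: String(value) }; // reversible, unlike JS truthiness
  }
  return { ok: typeof value === type, value: value };
}
```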
API
new Ajv(Object options) -> Object
Create Ajv instance.
.compile(Object schema) -> Function<Object data>
Generate validating function and cache the compiled schema for future use.
The validating function returns a boolean and has properties errors (the errors from the last validation, or null if there were none) and schema (a reference to the original schema).
Unless the option validateSchema is false, the schema will be validated against the meta-schema and an error will be thrown if the schema is invalid. See options.
.compileAsync(Object schema [, Boolean meta] [, Function callback]) -> Promise
Asynchronous version of the compile method that loads missing remote schemas using the asynchronous function in options.loadSchema. This function returns a Promise that resolves to a validation function. An optional callback passed to compileAsync will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback, if passed, will be called with an error) when:
- missing schema can't be loaded (loadSchema returns a Promise that rejects)
- the schema containing the missing reference is loaded, but the reference cannot be resolved
- the schema (or some referenced schema) is invalid
The function compiles schema and loads the first missing schema (or meta-schema), until all missing schemas are loaded.
You can asynchronously compile a meta-schema by passing true as the second parameter.
See example in Asynchronous compilation.
.validate(Object schema|String key|String ref, data) -> Boolean
Validate data using passed schema (it will be compiled and cached).
Instead of the schema you can use the key that was previously passed to addSchema, the schema id if it was present in the schema, or any previously resolved reference.
Validation errors will be available in the errors property of the Ajv instance (null if there were no errors).
Please note: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later.
If the schema is asynchronous (has the $async keyword on the top level) this method returns a Promise. See Asynchronous validation.
.addSchema(Array<Object>|Object schema [, String key])
Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole.
An array of schemas can be passed (the schemas should have ids); the second parameter will be ignored.
A key can be passed to reference the schema; it will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key.
Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data.
Although addSchema does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used for the first time.
By default the schema is validated against the meta-schema before it is added, and if the schema does not pass validation an exception is thrown. This behaviour is controlled by the validateSchema option.
.addMetaSchema(Array<Object>|Object schema [, String key])
Adds meta schema(s) that can be used to validate other schemas. This function should be used instead of addSchema because there may be instance options that would compile a meta schema incorrectly (at the moment it is the removeAdditional option).
There is no need to explicitly add the draft 6 meta schema (http://json-schema.org/draft-06/schema and http://json-schema.org/schema) - it is added by default, unless the option meta is set to false. You only need to use this method if you have a changed meta-schema that you want to use to validate your schemas. See validateSchema.
.validateSchema(Object schema) -> Boolean
Validates schema. This method should be used to validate schemas rather than validate due to the inconsistency of the uri format in the JSON Schema standard.
By default this method is called automatically when the schema is added, so you rarely need to use it directly.
If the schema doesn't have a $schema property, it is validated against the draft 6 meta-schema (option meta should not be false).
If the schema has a $schema property, then the schema with this id (which should be previously added) is used to validate the passed schema.
Errors will be available at ajv.errors.
.getSchema(String key) -> Function<Object data>
Retrieve a compiled schema previously added with addSchema by the key passed to addSchema or by its full reference (id). The returned validating function has a schema property with a reference to the original schema.
.removeSchema([Object schema|String key|String ref|RegExp pattern])
Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references.
Schema can be removed using:
- key passed to addSchema
- its full reference (id)
- RegExp that should match schema id or key (meta-schemas won't be removed)
- actual schema object that will be stable-stringified to remove the schema from cache
If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared.
.addFormat(String name, String|RegExp|Function|Object format)
Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for Ajv instance.
Strings are converted to RegExp.
Strings are converted to RegExp.
A function should return the validation result as true or false.
If an object is passed it should have the properties validate, compare and async:
- validate: a string, RegExp or a function as described above.
- compare: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with the keywords formatMaximum/formatMinimum (defined in the ajv-keywords package). It should return 1 if the first value is bigger than the second value, -1 if it is smaller and 0 if it is equal.
- async: an optional true value if validate is an asynchronous function; in this case it should return a promise that resolves with a value true or false.
- type: an optional type of data that the format applies to. It can be "string" (default) or "number" (see https://github.com/epoberezkin/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass.
Custom formats can be also added via formats
option.
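As a sketch of the object form, here is a hypothetical `version` format (the name and semantics are invented for illustration) with a `compare` function suitable for `formatMaximum`/`formatMinimum`:

```javascript
// Hypothetical "version" format: dotted numeric strings like "1.10.2".
// The format name and its semantics are illustrative, not part of Ajv.
var versionFormat = {
  // validate: a string, RegExp or function
  validate: /^\d+(\.\d+)*$/,
  // compare: returns 1 if a > b, -1 if a < b, 0 if equal,
  // as expected by formatMaximum/formatMinimum (ajv-keywords)
  compare: function (a, b) {
    var pa = a.split('.').map(Number);
    var pb = b.split('.').map(Number);
    for (var i = 0; i < Math.max(pa.length, pb.length); i++) {
      var x = pa[i] || 0;
      var y = pb[i] || 0;
      if (x > y) return 1;
      if (x < y) return -1;
    }
    return 0;
  }
};

// With an Ajv instance it would be registered as:
// ajv.addFormat('version', versionFormat);
```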
`.addKeyword(String keyword, Object definition)`
Add custom validation keyword to Ajv instance.
The keyword should be different from all standard JSON Schema keywords and from previously defined keywords. There is no way to redefine keywords or to remove a keyword definition from the instance.
A keyword must start with a letter, `_` or `$`, and may continue with letters, numbers, `_`, `$`, or `-`. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions.
Example keywords:
- `"xyz-example"`: valid, and uses a prefix for the xyz project to avoid name collisions.
- `"example"`: valid, but not recommended as it could collide with future versions of JSON Schema etc.
- `"3-example"`: invalid, as numbers are not allowed to be the first character in a keyword.
Keyword definition is an object with the following properties:
- type: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types.
- validate: validating function
- compile: compiling function
- macro: macro function
- inline: compiling function that returns code (as string)
- schema: an optional `false` value used with the "validate" keyword to not pass the schema
- metaSchema: an optional meta-schema for the keyword schema
- modifying: `true` MUST be passed if the keyword modifies data
- valid: pass `true`/`false` to pre-define the validation result; the result returned from the validation function will be ignored. This option cannot be used with macro keywords.
- $data: an optional `true` value to support the $data reference as the value of the custom keyword. The reference will be resolved at validation time. If the keyword has a meta-schema it would be extended to allow $data and it will be used to validate the resolved value. Supporting the $data reference requires that the keyword has a validating function (as the only option or in addition to the compile, macro or inline function).
- async: an optional `true` value if the validation function is asynchronous (whether it is compiled or passed in the validate property); in this case it should return a promise that resolves with a value `true` or `false`. This option is ignored in case of "macro" and "inline" keywords.
- errors: an optional boolean indicating whether the keyword returns errors. If this property is not set, Ajv will determine whether the errors were set in case of failed validation.
compile, macro and inline are mutually exclusive; only one should be used at a time. validate can be used separately or in addition to them to support the $data reference.
Please note: if the keyword validates a data type that is different from the type(s) in its definition, the validation function will not be called (and the expanded macro will not be used), so there is no need to check for data type inside the validation function or inside the schema returned by the macro function (unless you want to enforce a specific type and for some reason do not want to use a separate `type` keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed.
See Defining custom keywords for more details.
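To sketch the macro option: a macro function receives the keyword's schema and returns another schema that Ajv compiles in its place. The `x-range` keyword below is hypothetical (ajv-keywords ships a similar `range` keyword):

```javascript
// Hypothetical macro keyword "x-range": the schema [min, max]
// expands into standard minimum/maximum keywords that Ajv
// compiles instead of the custom keyword.
function rangeMacro(schema) {
  return { minimum: schema[0], maximum: schema[1] };
}

// With an Ajv instance it would be registered as:
// ajv.addKeyword('x-range', {
//   type: 'number',
//   macro: rangeMacro,
//   metaSchema: {
//     type: 'array',
//     items: { type: 'number' },
//     minItems: 2,
//     maxItems: 2
//   }
// });
```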
`.getKeyword(String keyword) -> Object|Boolean`
Returns custom keyword definition, `true` for pre-defined keywords and `false` if the keyword is unknown.
`.removeKeyword(String keyword)`
Removes a custom or pre-defined keyword so that you can redefine it.
While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning, which may lead to unexpected results.
Please note: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas, use the `removeSchema` method and compile them again.
`.errorsText([Array<Object> errors [, Object options]]) -> String`
Returns all errors as a single string.
Options can have properties `separator` (string used to separate errors, ", " by default) and `dataVar` (the variable name that dataPaths are prefixed with, "data" by default).
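Its behaviour can be sketched in a few lines (a simplified re-implementation for illustration, not Ajv's actual code):

```javascript
// Simplified sketch of errorsText: prefix each error's dataPath
// with dataVar, append the message, and join with the separator.
function errorsTextSketch(errors, options) {
  options = options || {};
  var separator = options.separator === undefined ? ', ' : options.separator;
  var dataVar = options.dataVar === undefined ? 'data' : options.dataVar;
  return errors.map(function (e) {
    return dataVar + e.dataPath + ' ' + e.message;
  }).join(separator);
}
```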
Options
Defaults:

```javascript
{
  // validation and reporting options:
  $data: false,
  allErrors: false,
  verbose: false,
  jsonPointers: false,
  uniqueItems: true,
  unicode: true,
  format: 'fast',
  formats: {},
  unknownFormats: true,
  schemas: {},
  // referenced schema options:
  schemaId: undefined, // recommended '$id'
  missingRefs: true,
  extendRefs: 'ignore', // recommended 'fail'
  loadSchema: undefined, // function(uri: string): Promise {}
  // options to modify validated data:
  removeAdditional: false,
  useDefaults: false,
  coerceTypes: false,
  // asynchronous validation options:
  async: 'co*',
  transpile: undefined, // requires ajv-async package
  // advanced options:
  meta: true,
  validateSchema: true,
  addUsedSchema: true,
  inlineRefs: true,
  passContext: false,
  loopRequired: Infinity,
  ownProperties: false,
  multipleOfPrecision: false,
  errorDataPath: 'object',
  messages: true,
  sourceCode: false,
  processCode: undefined, // function (str: string): string {}
  cache: new Cache,
  serialize: undefined
}
```
Validation and reporting options
- $data: support $data references. The draft 6 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use the $dataMetaSchema method to add support for the $data reference. See API.
- allErrors: check all rules, collecting all errors. Default is to return after the first error.
- verbose: include the reference to the part of the schema (`schema` and `parentSchema`) and the validated data in errors (false by default).
- jsonPointers: set the `dataPath` property of errors using JSON Pointers instead of JavaScript property access notation.
- uniqueItems: validate the `uniqueItems` keyword (true by default).
- unicode: calculate correct length of strings with unicode pairs (true by default). Pass `false` to use `.length` of strings; that is faster, but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters.
- format: formats validation mode ('fast' by default). Pass 'full' for more correct and slower validation, or `false` not to validate formats at all. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode, but valid in 'fast' mode.
- formats: an object with custom formats. Keys and values will be passed to the `addFormat` method.
- unknownFormats: handling of unknown formats. Option values:
  - `true` (default) - if an unknown format is encountered, an exception is thrown during schema compilation. If the `format` keyword value is a $data reference and it is unknown, the validation will fail.
  - `[String]` - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don't have definitions, but still fail if another unknown format is used. If the `format` keyword value is a $data reference and it is not in this array, the validation will fail.
  - `"ignore"` - log a warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows you to mistype a format name which then won't be validated, without any error message. This behaviour is required by the JSON Schema specification.
- schemas: an array or object of schemas that will be added to the instance. If the order is important, pass an array; in this case schemas must have IDs in them. Otherwise an object can be passed - `addSchema(value, key)` will be called for each schema in this object.
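The length difference that the `unicode` option accounts for can be seen with plain JavaScript (no Ajv involved):

```javascript
// A character outside the Basic Multilingual Plane is stored as a
// surrogate pair, so String#length counts two UTF-16 code units.
var s = '\uD835\uDC9C'; // U+1D49C MATHEMATICAL SCRIPT CAPITAL A

var codeUnitLength = s.length;       // what unicode: false uses (faster)
var codePointLength = [...s].length; // what unicode: true reports

// unicode: true (default) counts this string as 1 character;
// unicode: false counts the same string as 2 characters.
```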
Referenced schema options
- schemaId: this option defines which keywords are used as schema URI. Option values:
  - `"$id"` (recommended) - only use the `$id` keyword as schema URI (as specified in JSON Schema draft-06); ignore the `id` keyword (if it is present, a warning will be logged).
  - `"id"` - only use the `id` keyword as schema URI (as specified in JSON Schema draft-04); ignore the `$id` keyword (if it is present, a warning will be logged).
  - `undefined` (default) - use both `$id` and `id` keywords as schema URI. If both are present (in the same schema object) and different, an exception will be thrown during schema compilation.
- missingRefs: handling of missing referenced schemas. Option values:
  - `true` (default) - if the reference cannot be resolved during compilation, an exception is thrown. The thrown error has properties `missingRef` (with hash fragment) and `missingSchema` (without it). Both properties are resolved relative to the current base id (usually the schema id, unless it was substituted).
  - `"ignore"` - log an error during compilation and always pass validation.
  - `"fail"` - log an error and successfully compile the schema, but fail validation if this rule is checked.
- extendRefs: validation of other keywords when `$ref` is present in the schema. Option values:
  - `"ignore"` (default) - when `$ref` is used, other keywords are ignored (as per the JSON Reference standard). A warning will be logged during schema compilation.
  - `"fail"` (recommended) - if other validation keywords are used together with `$ref`, an exception will be thrown when the schema is compiled. This option is recommended to make sure the schema has no keywords that are ignored, which can be confusing.
  - `true` - validate all keywords in schemas with `$ref` (the default behaviour in versions before 5.0.0).
- loadSchema: asynchronous function that will be used to load remote schemas when the `compileAsync` method is used and some reference is missing (option `missingRefs` should NOT be 'fail' or 'ignore'). This function should accept a remote schema uri as a parameter and return a Promise that resolves to a schema. See example in Asynchronous compilation.
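A minimal `loadSchema` sketch (the in-memory table below stands in for an HTTP request; the URI is made up):

```javascript
// Sketch of a loadSchema function for compileAsync. A real
// implementation would fetch the schema over HTTP; here a
// hypothetical in-memory table stands in for the remote source.
var remoteSchemas = {
  'http://example.com/int.json': { type: 'integer' }
};

function loadSchema(uri) {
  var schema = remoteSchemas[uri];
  return schema
    ? Promise.resolve(schema)
    : Promise.reject(new Error('cannot load schema ' + uri));
}

// With an Ajv instance:
// var ajv = new Ajv({ loadSchema: loadSchema });
// ajv.compileAsync({ $ref: 'http://example.com/int.json' })
//   .then(function (validate) { /* use validate */ });
```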
Options to modify validated data
- removeAdditional: remove additional properties - see example in Filtering data. This option is not used if the schema is added with the `addMetaSchema` method. Option values:
  - `false` (default) - do not remove additional properties.
  - `"all"` - all additional properties are removed, regardless of the `additionalProperties` keyword in the schema (and no validation is made for them).
  - `true` - only additional properties with the `additionalProperties` keyword equal to `false` are removed.
  - `"failing"` - additional properties that fail schema validation will be removed (where the `additionalProperties` keyword is `false` or a schema).
- useDefaults: replace missing properties and items with the values from the corresponding `default` keywords. Default behaviour is to ignore `default` keywords. This option is not used if the schema is added with the `addMetaSchema` method. See examples in Assigning defaults. Option values:
  - `false` (default) - do not use defaults.
  - `true` - insert defaults by value (safer and slower; an object literal is used).
  - `"shared"` - insert defaults by reference (faster). If the default is an object, it will be shared by all instances of validated data. If you modify an inserted default in the validated data, it will be modified in the schema as well.
- coerceTypes: change the data type of data to match the `type` keyword. See the example in Coercing data types and coercion rules. Option values:
  - `false` (default) - no type coercion.
  - `true` - coerce scalar data types.
  - `"array"` - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema).
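For instance, with `coerceTypes: true` a string like `"123"` can be accepted for `{ type: 'number' }` and coerced to `123`. A rough sketch of that one string-to-number rule (illustrative only, not Ajv's actual implementation):

```javascript
// Illustrative sketch of one scalar coercion rule: a non-empty
// string is coerced to a number only if it parses as a finite
// number in full; otherwise coercion (and validation) fails.
function coerceStringToNumber(value) {
  var n = Number(value);
  return value !== '' && Number.isFinite(n) ? n : undefined;
}
```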
Asynchronous validation options
- async: determines how Ajv compiles asynchronous schemas (see Asynchronous validation) to functions. Option values:
  - `"*"` / `"co*"` (default) - compile to a generator function ("co*" - wrapped with `co.wrap`). If generators are not supported and you don't provide the `processCode` option (or the `transpile` option if you use the ajv-async package), an exception will be thrown when an async schema is compiled.
  - `"es7"` - compile to an es7 async function. Unless your platform supports them, you need to provide the `processCode` / `transpile` option. According to the compatibility table, async functions are supported by:
    - Firefox 52,
    - Chrome 55,
    - Node.js 7 (with `--harmony-async-await`),
    - MS Edge 13 (with flag).
  - `undefined` / `true` - auto-detect async mode. It requires the ajv-async package. If the transpile option is not passed, ajv-async will choose the first of the supported/installed async/transpile modes, in this order, during the creation of the Ajv instance:
    - "es7" (native async functions),
    - "co*" (native generators with co.wrap),
    - "es7"/"nodent",
    - "co*"/"regenerator".
    If none of the options is available, an exception will be thrown.
- transpile: requires the ajv-async package. It determines whether Ajv transpiles the compiled asynchronous validation function. Option values:
  - `"nodent"` - transpile with nodent. If nodent is not installed, an exception will be thrown. nodent can only transpile es7 async functions; it will enforce this mode.
  - `"regenerator"` - transpile with regenerator. If regenerator is not installed, an exception will be thrown.
  - a function - this function should accept the code of the validation function as a string and return transpiled code. This option allows you to use any other transpiler you prefer. If you are passing a function, you can simply pass it to the `processCode` option without using ajv-async.
Advanced options
- meta: add meta-schema so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no `$schema` keyword. This default meta-schema MUST have a `$schema` keyword.
- validateSchema: validate added/compiled schemas against the meta-schema (true by default). The `$schema` property in the schema can be http://json-schema.org/schema, http://json-schema.org/draft-04/schema, absent (the draft-4 meta-schema will be used), or a reference to a schema previously added with the `addMetaSchema` method. Option values:
  - `true` (default) - if the validation fails, throw an exception.
  - `"log"` - if the validation fails, log an error.
  - `false` - skip schema validation.
- addUsedSchema: by default the methods `compile` and `validate` add schemas to the instance if they have an `$id` (or `id`) property that doesn't start with "#". If `$id` is present and it is not unique, an exception will be thrown. Set this option to `false` to skip adding schemas to the instance and the `$id` uniqueness check when these methods are used. This option does not affect the `addSchema` method.
- inlineRefs: affects compilation of referenced schemas. Option values:
  - `true` (default) - referenced schemas that don't have refs in them are inlined, regardless of their size - that substantially improves performance, at the cost of a bigger size of compiled schema functions.
  - `false` - do not inline referenced schemas (they will be compiled as separate functions).
  - integer number - limit the maximum number of keywords of the schema that will be inlined.
- passContext: pass validation context to custom keyword functions. If this option is `true` and you pass some context to the compiled validation function with `validate.call(context, data)`, the `context` will be available as `this` in your custom keywords. By default `this` is the Ajv instance.
- loopRequired: by default the `required` keyword is compiled into a single expression (or a sequence of statements in `allErrors` mode). With a very large number of properties in this keyword it may result in a very big validation function. Pass an integer to set the number of properties above which the `required` keyword will be validated in a loop - smaller validation function size, but also worse performance.
- ownProperties: by default Ajv iterates over all enumerable object properties; when this option is `true`, only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst.
- multipleOfPrecision: by default the `multipleOf` keyword is validated by comparing the result of the division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not an integer (even when it should be an integer, see issue #84). If you need to use fractional dividers, set this option to some positive integer N to have `multipleOf` validated using this formula: `Math.abs(Math.round(division) - division) < 1e-N` (it is slower but allows for floating point arithmetic deviations).
- errorDataPath: set `dataPath` to point to 'object' (default) or to 'property' when validating keywords `required`, `additionalProperties` and `dependencies`.
- messages: include human-readable messages in errors. `true` by default. `false` can be passed when custom messages are used (e.g. with ajv-i18n).
- sourceCode: add a `sourceCode` property to the validating function (for debugging; this code can be different from the result of a toString call).
- processCode: an optional function to process generated code before it is passed to the Function constructor. It can be used either to beautify code (the validating function is generated without line breaks) or to transpile it. Starting from version 5.0.0 this option replaced the options:
  - `beautify` that formatted the generated function using js-beautify. If you want to beautify the generated code, pass `require('js-beautify').js_beautify`.
  - `transpile` that transpiled the asynchronous validation function. You can still use the `transpile` option with the ajv-async package. See Asynchronous validation for more information.
- cache: an optional instance of cache to store compiled schemas using a stable-stringified schema as a key. For example, the set-associative cache sacjs can be used. If not passed, a simple hash is used, which is good enough for the common use case (a limited number of statically defined schemas). The cache should have methods `put(key, value)`, `get(key)`, `del(key)` and `clear()`.
- serialize: an optional function to serialize a schema to a cache key. Pass `false` to use the schema itself as a key (e.g., if a WeakMap is used as a cache). By default json-stable-stringify is used.
Validation errors
In case of validation failure, Ajv assigns the array of errors to the `.errors` property of the validation function (or to the `.errors` property of the Ajv instance in case the `validate` or `validateSchema` methods were called). In case of asynchronous validation, the returned promise is rejected with an exception of the class `Ajv.ValidationError` that has an `.errors` property.
Error objects
Each error is an object with the following properties:
- keyword: validation keyword.
- dataPath: the path to the part of the data that was validated. By default `dataPath` uses JavaScript property access notation (e.g., `".prop[1].subProp"`). When the option `jsonPointers` is true (see Options), `dataPath` will be set using the JSON Pointer standard (e.g., `"/prop/1/subProp"`).
- schemaPath: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation.
- params: the object with additional information about the error that can be used to create custom error messages (e.g., using the ajv-i18n package). See below for parameters set by all keywords.
- message: the standard error message (can be excluded with option `messages` set to false).
- schema: the schema of the keyword (added with the `verbose` option).
- parentSchema: the schema containing the keyword (added with the `verbose` option).
- data: the data validated by the keyword (added with the `verbose` option).
Please note: `propertyNames` keyword schema validation errors have an additional property `propertyName`, and `dataPath` points to the object. After schema validation for each property name, if it is invalid an additional error is added with the property `keyword` equal to `"propertyNames"`.
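The two `dataPath` notations describe the same location. A rough conversion from the default property-access notation to a JSON Pointer (a simplification that ignores JSON Pointer escaping and bracketed string keys):

```javascript
// Rough sketch: ".prop[1].subProp" -> "/prop/1/subProp".
// Ignores JSON Pointer escaping (~0/~1) and keys that need quoting,
// so it only illustrates how the two notations correspond.
function toJsonPointer(dataPath) {
  return dataPath
    .replace(/\[(\d+)\]/g, '/$1') // array index [1] -> /1
    .replace(/\./g, '/');         // property access .prop -> /prop
}
```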
Error parameters
Properties of the `params` object in errors depend on the keyword that failed validation.
- `maxItems`, `minItems`, `maxLength`, `minLength`, `maxProperties`, `minProperties` - property `limit` (number, the schema of the keyword).
- `additionalItems` - property `limit` (the maximum number of allowed items in case when the `items` keyword is an array of schemas and `additionalItems` is false).
- `additionalProperties` - property `additionalProperty` (the property not used in `properties` and `patternProperties` keywords).
- `dependencies` - properties: `property` (dependent property), `missingProperty` (required missing dependency - only the first one is reported currently), `deps` (required dependencies, comma-separated list as a string), `depsCount` (the number of required dependencies).
- `format` - property `format` (the schema of the keyword).
- `maximum`, `minimum` - properties: `limit` (number, the schema of the keyword), `exclusive` (boolean, the schema of `exclusiveMaximum` or `exclusiveMinimum`), `comparison` (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">", ">=").
- `multipleOf` - property `multipleOf` (the schema of the keyword).
- `pattern` - property `pattern` (the schema of the keyword).
- `required` - property `missingProperty` (required property that is missing).
- `propertyNames` - property `propertyName` (an invalid property name).
- `patternRequired` (in ajv-keywords) - property `missingPattern` (required pattern that did not match any property).
- `type` - property `type` (required type(s), a string, can be a comma-separated list).
- `uniqueItems` - properties `i` and `j` (indices of duplicate items).
- `enum` - property `allowedValues` pointing to the array of values (the schema of the keyword).
- `$ref` - property `ref` with the referenced schema URI.
- custom keywords (in case the keyword definition doesn't create errors) - property `keyword` (the keyword name).
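As a sketch of consuming `params` (roughly what ajv-i18n does per keyword; the message wording below is invented):

```javascript
// Build a custom message for a "maximum" keyword error from its
// params (comparison, limit) and dataPath; fall back to the
// standard message for other keywords.
function customMessage(error) {
  if (error.keyword === 'maximum') {
    return 'value at "' + error.dataPath + '" should be ' +
      error.params.comparison + ' ' + error.params.limit;
  }
  return error.message;
}
```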
Related packages
- ajv-cli - command line interface for Ajv
- ajv-i18n - internationalised error messages
- ajv-merge-patch - keywords $merge and $patch.
- ajv-keywords - several custom keywords that can be used with Ajv (typeof, instanceof, range, propertyNames)
- ajv-errors - custom error messages for Ajv
Some packages using Ajv
- webpack - a module bundler. Its main purpose is to bundle JavaScript files for usage in a browser
- jsonscript-js - the interpreter for JSONScript - scripted processing of existing endpoints and services
- osprey-method-handler - Express middleware for validating requests and responses based on a RAML method object, used in osprey - validating API proxy generated from a RAML definition
- har-validator - HTTP Archive (HAR) validator
- jsoneditor - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org
- JSON Schema Lint - a web tool to validate JSON/YAML document against a single JSON-schema http://jsonschemalint.com
- objection - SQL-friendly ORM for node.js
- table - formats data into a string table
- ripple-lib - a JavaScript API for interacting with Ripple in Node.js and the browser
- restbase - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content
- hippie-swagger - Hippie wrapper that provides end to end API testing with swagger validation
- react-form-controlled - React controlled form components with validation
- rabbitmq-schema - a schema definition module for RabbitMQ graphs and messages
- @query/schema - stream filtering with a URI-safe query syntax parsing to JSON Schema
- chai-ajv-json-schema - chai plugin to use JSON-schema with expect in mocha tests
- grunt-jsonschema-ajv - Grunt plugin for validating files against JSON Schema
- extract-text-webpack-plugin - extract text from bundle into a file
- electron-builder - a solution to package and build a ready for distribution Electron app
- addons-linter - Mozilla Add-ons Linter
- gh-pages-generator - multi-page site generator converting markdown files to GitHub pages
Tests

```shell
npm install
git submodule update --init
npm test
```
Contributing
All validation functions are generated using doT templates in the dot folder. Templates are precompiled, so doT is not a run-time dependency.
`npm run build` - compiles templates to the dotjs folder.
`npm run watch` - automatically compiles templates when files in the dot folder change.
Please see Contributing guidelines
Changes history
See https://github.com/epoberezkin/ajv/releases
Please note: Changes in version 5.0.0.