# Patterns
This document describes various patterns for solving common problems, in ways that are not (yet) specified in any Frictionless Data specification. If we see increased adoption, or wide support, for any pattern, it is a prime candidate for formalising as part of a specification.
## Private properties

### Overview
Some software that implements the Frictionless Data specifications may need to store additional information on the various Frictionless Data descriptors.

For example, a data registry that provides metadata via `datapackage.json` may wish to set an internal version or identifier that is system-specific, and should not be considered as part of the user-generated metadata.

Properties to store such information should be considered “private”, and by convention, the names should be prefixed by an underscore `_`.
### Implementations
There are no known implementations at present.
### Specification
On any Frictionless Data descriptor, data that is not generated by the author/contributors, but is generated by software/a system handling the data, SHOULD be considered as “private”, and be prefixed by an underscore `_`.

To demonstrate, let’s take the example of a data registry that implements `datapackage.json` for storing dataset metadata. A user might upload a `datapackage.json` as follows:
```json
{
  "name": "my-package",
  "resources": [
    {
      "name": "my-resource",
      "data": [ "my-resource.csv" ]
    }
  ]
}
```
The registry itself may have a platform-specific version system, and increment versions on each update of the data. To store this information on the datapackage itself, the platform could save this information in a “private” `_platformVersion` property as follows:
```json
{
  "name": "my-package",
  "_platformVersion": 7,
  "resources": [
    {
      "name": "my-resource",
      "data": [ "my-resource.csv" ]
    }
  ]
}
```
Usage of “private” properties ensures a clear distinction between data stored on the descriptor that is user (author/contributor) defined, and any additional data that may be stored by a 3rd party.
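To illustrate, here is a minimal Python sketch (not part of any specification; the filename and helper name are illustrative) of how an implementation might separate private properties from user-generated metadata:

```python
# Split a descriptor into user-facing and "private" (underscore-prefixed)
# properties, e.g. before displaying or re-exporting user metadata.
import json

def split_private(descriptor: dict) -> tuple:
    """Return (public, private) top-level views of a descriptor."""
    public = {k: v for k, v in descriptor.items() if not k.startswith("_")}
    private = {k: v for k, v in descriptor.items() if k.startswith("_")}
    return public, private

with open("datapackage.json") as f:
    descriptor = json.load(f)

public, private = split_private(descriptor)
print(json.dumps(public, indent=2))  # user-generated metadata only
```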
## Caching of resources

### Overview
All Frictionless Data specifications allow for referencing resources via HTTP or a local filesystem.

In the case of remote resources via HTTP, there is always the possibility that the remote server will be unavailable, or that the resource itself will be temporarily or permanently removed.

Applications that are concerned with the persistent storage of data described in Frictionless Data specifications can use a `_cache` property that mirrors the functionality and usage of the `data` property, and refers to a storage location for the data that the application can fall back to if the canonical resource is unavailable.
### Implementations
There are no known implementations of this pattern at present.
### Specification
Implementations MAY handle a `_cache` property on any descriptor that supports a `data` property. In the case that the data referenced in `data` is unavailable, `_cache` should be used as a fallback to access the data. The handling of the data stored at `_cache` is beyond the scope of the specification. Implementations might store a copy of the resources in `data` at ingestion time, update the copy at regular intervals, or use any other method to keep an up-to-date, persistent copy.

Some examples of the `_cache` property:
```json
{
  "name": "my-package",
  "resources": [
    {
      "name": "my-resource",
      "data": [ "http://example.com/data/csv/my-resource.csv" ],
      "_cache": "my-resource.csv"
    },
    {
      "name": "my-resource",
      "data": [ "http://example.com/data/csv/my-resource.csv" ],
      "_cache": "http://data.registry.com/user/files/my-resource.csv"
    },
    {
      "name": "my-resource",
      "data": [
        "http://example.com/data/csv/my-resource.csv",
        "http://somewhere-else.com/data/csv/resource2.csv"
      ],
      "_cache": [
        "my-resource.csv",
        "resource2.csv"
      ]
    },
    {
      "name": "my-resource",
      "data": [ "http://example.com/data/csv/my-resource.csv" ],
      "_cache": "my-resource.csv"
    }
  ]
}
```
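A minimal sketch of the fallback behaviour described above, assuming a resource descriptor shaped like the examples (the helper names are illustrative):

```python
# Open each source listed in `data`, falling back to the corresponding
# `_cache` entry when the canonical location is unavailable.
import urllib.request

def _open(location: str):
    if location.startswith(("http://", "https://")):
        return urllib.request.urlopen(location)
    return open(location, "rb")

def open_with_fallback(resource: dict) -> list:
    # Normalise string and array forms of `data`/`_cache` to lists.
    data = resource["data"]
    cache = resource.get("_cache", [])
    sources = data if isinstance(data, list) else [data]
    fallbacks = cache if isinstance(cache, list) else [cache]
    streams = []
    for i, location in enumerate(sources):
        try:
            streams.append(_open(location))
        except OSError:
            # Canonical resource unavailable: fall back to the cached copy.
            streams.append(_open(fallbacks[i]))
    return streams
```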
## Compression of resources

### Overview
It can be argued that applying compression to data resources can make data package publishing more cost-effective and sustainable. Compressing data resources gives publishers the benefit of reduced storage and bandwidth costs and gives consumers the benefit of shorter download times.
### Implementations
- tabulator-py (Gzip and Zip support)
- datapackage-connector (Gzip support)
- datapackage-m (Gzip support)
### Specification
All compressed resources MUST have a `path` that allows the `compression` property to be inferred. If the compression can’t be inferred from the `path` property (e.g. a custom file extension is used) then the `compression` property MUST be used to specify the compression.

Supported compression types:

- `gz`
- `zip`
Example of a compressed resource with implied compression:
```json
{
  "name": "data-resource-compression-example",
  "path": "http://example.com/large-data-file.csv.gz",
  "title": "Large Data File",
  "description": "This large data file benefits from compression.",
  "format": "csv",
  "mediatype": "text/csv",
  "encoding": "utf-8",
  "bytes": 1073741824
}
```
Example of a compressed resource with the `compression` property:
```json
{
  "name": "data-resource-compression-example",
  "path": "http://example.com/large-data-file.csv.gz",
  "title": "Large Data File",
  "description": "This large data file benefits from compression.",
  "format": "csv",
  "compression": "gz",
  "mediatype": "text/csv",
  "encoding": "utf-8",
  "bytes": 1073741824
}
```
NOTE: Resource properties (e.g. `bytes`, `hash`) apply to the compressed object – not to the original uncompressed object.
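A minimal sketch of the inference rule above (the extension mapping and helper name are illustrative, covering only the two supported types):

```python
# Infer the compression type of a resource: prefer an explicit
# `compression` property, otherwise infer it from the `path` extension.
EXTENSIONS = {".gz": "gz", ".zip": "zip"}

def infer_compression(resource: dict):
    if "compression" in resource:
        return resource["compression"]
    for extension, compression in EXTENSIONS.items():
        if resource["path"].endswith(extension):
            return compression
    return None  # uncompressed

assert infer_compression({"path": "http://example.com/large-data-file.csv.gz"}) == "gz"
assert infer_compression({"path": "data.custom", "compression": "zip"}) == "zip"
```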
## Language support

### Overview

Language support is a different concern to translation support. Language support deals with declaring the default language of a descriptor and the data it contains in the resources array. Language support makes no claim about the presence of translations when one or more languages are supported in a descriptor or in data. Via the introduction of a `languages` array to any descriptor, we can declare the default language, and any other languages that SHOULD be found in the descriptor and the data.
### Implementations
There are no known implementations of this pattern at present.
### Specification
Any Frictionless Data descriptor can declare the language configuration of its metadata and data with the `languages` array.

`languages` MUST be an array, and the first item in the array is the default (non-translated) language.

If no `languages` array is present, the default language is English (`en`), and therefore is equivalent to:
```json
{
  "name": "my-package",
  "languages": ["en"]
}
```
The presence of a `languages` array does not ensure that the metadata or the data has translations for all supported languages.

The descriptor and data sources MUST be in the default language. The descriptor and data sources MAY have translations for the other languages in the array, using the same language code. If a translation is not present, implementing code MUST fall back to the default language string.
Example usage of `languages`, implemented in the metadata of a descriptor:

```json
{
  "name": "sun-package",
  "languages": ["es", "en"],
  "title": "Sol"
}
```

which is equivalent to:

```json
{
  "name": "sun-package",
  "languages": ["es", "en"],
  "title": {
    "": "Sol",
    "en": "Sun"
  }
}
```
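A minimal sketch of the fallback rule for translated metadata (the function name is illustrative):

```python
# Resolve a translatable property to a requested language, falling back
# to the default-language string (stored under the empty key "").
def resolve(value, lang: str) -> str:
    if isinstance(value, str):
        return value  # untranslated: already the default language
    return value.get(lang, value[""])

title = {"": "Sol", "en": "Sun"}
assert resolve(title, "en") == "Sun"
assert resolve(title, "fr") == "Sol"  # no French translation: fall back
```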
Example usage of `languages`, implemented in the data described by a resource. The resource descriptor:

```json
{
  "name": "solar-system",
  "data": [ "solar-system.csv" ],
  "fields": [
    ...
  ],
  "languages": ["es", "en", "he", "fr", "ar"]
}
```
And the data source (some languages have translations, some do not; this assumes the inline translation pattern – see the related section):

```csv
id,name,name@fr,name@he,name@en
1,Sol,Soleil,שמש,Sun
2,Luna,Lune,ירח,Moon
```
## Translation support

### Overview
Following on from a general pattern for language support, and the explicit support of metadata translations in Frictionless Data descriptors, it would be desirable to support translations in source data.
We currently have two patterns for this in discussion. Both patterns arise from real-world implementations that are not specifically tied to Frictionless Data.
One pattern suggests inline translations with the source data, reserving the `@` symbol in the naming of fields to denote translations.

The other describes a pattern for storing additional translation sources, co-located with the “source” file described in a descriptor’s `data` property.
### Implementations
There are no known implementations of this pattern in the Frictionless Data core libraries at present.
### Specification

#### Inline
Uses a column naming convention for accessing translations.
Tabular resource descriptors support translations using `{field_name}@{lang_code}` syntax for translated field names. The `lang_code` MUST be present in the `languages` array that applies to the resource.

Any field with the `@` symbol MUST be a translation field for another field of data, and MUST be parsable according to the `{field_name}@{lang_code}` pattern.

If a translation field is found in the data that does not have a corresponding field (e.g. `title@es` but no `title`), then the translation field SHOULD be ignored.

If a translation field is found in the data that uses a `lang_code` not declared in the applied `languages` array, then the translation field SHOULD be ignored.

Translation fields MUST NOT be described in a schema’s `fields` array.

Translation fields MUST match the `type`, `format` and `constraints` of the field they translate, with a single exception: translation fields are never required, and therefore `constraints.required` is always `false` for a translation field.
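A minimal sketch of these parsing rules (names are illustrative):

```python
# Split a table's headers into base fields and translation fields,
# ignoring translations that violate the SHOULD-ignore rules above.
def classify_headers(headers: list, languages: list):
    base = [h for h in headers if "@" not in h]
    translations = []
    for header in headers:
        if "@" not in header:
            continue
        field_name, lang_code = header.rsplit("@", 1)
        # Ignore translations with no base field or an undeclared language.
        if field_name in base and lang_code in languages:
            translations.append((field_name, lang_code))
    return base, translations

headers = ["id", "name", "name@fr", "name@he", "name@en", "name@xx"]
base, translations = classify_headers(headers, ["es", "en", "he", "fr", "ar"])
assert base == ["id", "name"]
assert ("name", "xx") not in translations  # undeclared language: ignored
```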
#### Co-located translation sources
Uses a file storage convention for accessing translations.
To be contributed by @jheeffer
- Has to handle local and remote resources
- Has to be explicit about the translation key/value pattern in the translation files
```text
# local
data/file1.csv
data/lang/file1-en.csv
data/lang/file1-es.csv

# remote
http://example.com/data/file2.csv
http://example.com/data/lang/file2-en.csv
http://example.com/data/lang/file2-es.csv
```
## Table Schema: Foreign Keys to Data Packages

### Overview
A foreign key is a reference where values in a field (or fields) in a Tabular Data Resource link to values in a field (or fields) in a Tabular Data Resource in the same or in another Tabular Data Package.
This pattern allows users to link values in a field (or fields) in a Tabular Data Resource to values in a field (or fields) in a Tabular Data Resource in a different Tabular Data Package.
### Specification
The `foreignKeys` array MAY have a property `package`. This property MUST be either:

- a string that is a fully qualified HTTP address to a Data Package `datapackage.json` file, or
- a data package `name` that can be resolved by a canonical data package registry

If the referenced data package has an `id` that is a fully qualified HTTP address, it SHOULD be used as the `package` value.
For example:
"foreignKeys": [{
"fields": ["code"],
"reference": {
"package": "https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/donation-codes/datapackage.json",
"resource": "donation-codes",
"fields": ["donation code"]
}
}]
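A minimal sketch of resolving such a reference (helper names are illustrative; the registry branch is left abstract because no canonical registry is specified):

```python
# Fetch the referenced package descriptor, then locate the referenced
# resource by name so its fields can be checked against the local key.
import json
import urllib.request

def resolve_package(package: str) -> dict:
    if package.startswith(("http://", "https://")):
        with urllib.request.urlopen(package) as response:
            return json.load(response)
    raise NotImplementedError(f"registry lookup for name: {package}")

def resolve_reference(reference: dict) -> dict:
    descriptor = resolve_package(reference["package"])
    for resource in descriptor["resources"]:
        if resource.get("name") == reference["resource"]:
            return resource
    raise KeyError(reference["resource"])
```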
## Data Package Version

### Specification
The Data Package version format follows the Semantic Versioning specification format: `MAJOR.MINOR.PATCH`
The version numbers, and the way they change, convey meaning about how the data package has been modified from one version to the next.
Given a Data Package version number `MAJOR.MINOR.PATCH`, increment the:

MAJOR version when you make incompatible changes, e.g.

- Change the data package, resource or field `name` or `identifier`
- Add, remove or re-order fields
- Change a field `type` or `format`
- Change a field `constraint` to be more restrictive
- Combine, split, delete or change the meaning of data that is referenced by another data resource
MINOR version when you add data or change metadata in a backwards-compatible manner, e.g.

- Add a new data resource to a data package
- Add new data to an existing data resource
- Change a field `constraint` to be less restrictive
- Update a reference to another data resource
- Change data to reflect changes in referenced data
PATCH version when you make backwards-compatible fixes, e.g.
- Correct errors in existing data
- Change descriptive metadata properties
### Scenarios
- You are developing your data through public consultation. Start your initial data release at `0.1.0`
- You release your data for the first time. Use version `1.0.0`
- You append last month’s data to an existing release. Increment the MINOR version number
- You append a column to the data. Increment the MAJOR version number
- You relocate the data to a new `URL` or `path`. No change in the version number
- You change a `title`, `description`, or other descriptive metadata. Increment the PATCH version
- You fix a data entry error by modifying a value. Increment the PATCH version
- You split a row of data in a foreign key reference table. Increment the MAJOR version number
- You update the data and schema to refer to a new version of a foreign key reference table. Increment the MINOR version number
## Data Dependencies

Consider a situation where data packages are part of a tool chain that, say, loads all of the data into an SQL db. You can then imagine a situation where one requires package A which requires package B + C.

In this case you want to specify that A depends on B and C – and that “installing” A should install B and C. This is the purpose of the `dataDependencies` property.
### Specification

`dataDependencies` is an object. It follows the same format as the CommonJS Packages spec v1.1. Each dependency defines the lowest compatible `MAJOR[.MINOR[.PATCH]]` dependency versions (only one per MAJOR version) with which the package has been tested and is assured to work. The version may be a simple version string (see the version property for acceptable forms), or it may be an object group of dependencies which define a set of options, any one of which satisfies the dependency. The ordering of the group is significant and earlier entries have higher priority. Example:
"dataDependencies": {
"country-codes": "",
"unemployment": "2.1",
"geo-boundaries": {
"acmecorp-geo-boundaries": ["1.0", "2.0"],
"othercorp-geo-boundaries": "0.9.8",
},
}
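A minimal sketch of the “installing A installs B and C” behaviour (the `fetch_package` helper is hypothetical; version matching and option groups are elided):

```python
# Recursively walk dataDependencies, collecting every package that an
# "install" of the root package would pull in.
def install(name: str, installed=None) -> set:
    installed = installed if installed is not None else set()
    if name in installed:
        return installed  # already handled; also guards against cycles
    installed.add(name)
    descriptor = fetch_package(name)  # hypothetical registry lookup
    for dependency in descriptor.get("dataDependencies", {}):
        install(dependency, installed)
    return installed
```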
### Implementations
None known.
## Table Schema: metadata properties

### Overview
Table Schemas need their own metadata to be stand-alone and interpreted without relying on other contextual information (Data Package metadata for example). Adding metadata to describe schemas in a structured way would help users to understand them and would increase their sharing and reuse.
Currently it is possible to add custom properties to a Table Schema, but the lack of consensus about those properties restricts common tooling and wider adoption.
### Use cases
- Documentation: generating Markdown documentation from the schema itself is a useful use case, and contextual information (description, version, authors…) needs to be retrieved.
- Cataloging: open data standardisation can be increased by improving Table Schemas shareability, for example by searching and categorising them (by keywords, countries, full-text…) in catalogs.
- Machine readability: tools like Goodtables could use catalogs to access Table Schemas in order to help users validate tabular files against existing schemas. Metadata would be needed for tools to find and read those schemas.
### Specification
This pattern introduces the following properties to the Table Schema spec (using the Frictionless Data core dictionary as much as possible):
- `name`: An identifier string for this schema.
- `title`: A human-readable title for this schema.
- `description`: A text description for this schema.
- `keywords`: The keyword(s) that describe this schema. Tags are useful to categorise and catalog schemas.
- `countryCode`: The ISO 3166-1 alpha-2 code for the country where this schema is primarily used. Since open data schemas are very country-specific, it’s useful to have this information in a structured way.
- `homepage`: The home on the web that is related to this schema.
- `path`: A fully qualified URL for this schema. The direct path to the schema itself can be useful to help accessing it (i.e. machine readability).
- `image`: An image to represent this schema. An optional illustration can be useful, for example in catalogs, to differentiate schemas in a list.
- `licenses`: The license(s) under which this schema is published.
- `resources`: Example tabular data resource(s) validated or invalidated against this schema. Oftentimes, schemas are shared with example resources to illustrate them, with valid or even invalid files (e.g. with constraint errors).
- `sources`: The source(s) used to create this schema. In some cases, schemas are created after a legal text or some draft specification in a human-readable document. In those cases, it’s useful to share them with the schema.
- `created`: The datetime on which this schema was created.
- `lastModified`: The datetime on which this schema was last modified.
- `version`: A unique version number for this schema.
- `contributors`: The contributors to this schema.
### Example schema
```json
{
  "$schema": "https://specs.frictionlessdata.io/schemas/table-schema.json",
  "name": "irve",
  "title": "Infrastructures de recharge de véhicules électriques",
  "description": "Spécification du fichier d'échange relatif aux données concernant la localisation géographique et les caractéristiques techniques des stations et des points de recharge pour véhicules électriques",
  "keywords": [
    "electric vehicle",
    "ev",
    "charging station",
    "mobility"
  ],
  "countryCode": "FR",
  "homepage": "https://github.com/etalab/schema-irve",
  "path": "https://github.com/etalab/schema-irve/raw/v1.0.1/schema.json",
  "image": "https://github.com/etalab/schema-irve/raw/v1.0.1/irve.png",
  "licenses": [
    {
      "title": "Creative Commons Zero v1.0 Universal",
      "name": "CC0-1.0",
      "path": "https://creativecommons.org/publicdomain/zero/1.0/"
    }
  ],
  "resources": [
    {
      "title": "Valid resource",
      "name": "exemple-valide",
      "path": "https://github.com/etalab/schema-irve/raw/v1.0.1/exemple-valide.csv"
    },
    {
      "title": "Invalid resource",
      "name": "exemple-invalide",
      "path": "https://github.com/etalab/schema-irve/raw/v1.0.1/exemple-invalide.csv"
    }
  ],
  "sources": [
    {
      "title": "Arrêté du 12 janvier 2017 relatif aux données concernant la localisation géographique et les caractéristiques techniques des stations et des points de recharge pour véhicules électriques",
      "path": "https://www.legifrance.gouv.fr/eli/arrete/2017/1/12/ECFI1634257A/jo/texte"
    }
  ],
  "created": "2018-06-29",
  "lastModified": "2019-05-06",
  "version": "1.0.1",
  "contributors": [
    {
      "title": "John Smith",
      "email": "[email protected]",
      "organization": "Etalab",
      "role": "author"
    },
    {
      "title": "Jane Doe",
      "email": "[email protected]",
      "organization": "Civil Society Organization X",
      "role": "contributor"
    }
  ],
  "fields": [ ]
}
```
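As a sketch of the documentation use case above, metadata properties like these can be rendered to Markdown (the property selection and layout are illustrative):

```python
# Render a Markdown summary from a Table Schema's metadata properties.
import json

def schema_to_markdown(schema: dict) -> str:
    lines = [f"# {schema.get('title', schema['name'])}", ""]
    if "description" in schema:
        lines += [schema["description"], ""]
    for key in ("countryCode", "homepage", "version", "lastModified"):
        if key in schema:
            lines.append(f"- **{key}**: {schema[key]}")
    return "\n".join(lines)

with open("schema.json") as f:
    print(schema_to_markdown(json.load(f)))
```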
### Implementations

The following links are actual examples already using this pattern, but not 100% aligned with our proposal. The point is to make Table Schema users converge towards a common pattern, before considering changing the spec.
- @OpenDataFrance has initiated the creation of Table Schemas to standardise common French open data datasets. Their Markdown documentation is generated automatically from the schemas (using some scripts), including contextual information.
- A tool called Validata was developed, based on Goodtables, to help French open data producers follow the schemas. It uses metadata from the schemas to present them.
- @Etalab has launched schema.data.gouv.fr, an official open data schema catalog, which is specific to France. It needs additional metadata in the schemas to validate them.
- Example Table Schema from @Etalab using metadata properties.
## JSON Data Resources

### Overview
A simple format to describe a single structured JSON data resource. It includes support both for metadata such as author and title and a schema to describe the data.
### Introduction
A JSON Data Resource is a type of Data Resource specialized for describing structured JSON data.
A JSON Data Resource extends Data Resource in the following key ways:

- The `schema` property MUST follow the JSON Schema specification, either as a JSON object directly under the property, or as a string referencing another JSON document containing the JSON Schema
### Examples
A minimal JSON Data Resource, referencing external JSON documents, looks as follows.
```json
// with data and a schema accessible via the local filesystem
{
  "profile": "json-data-resource",
  "name": "resource-name",
  "path": [ "resource-path.json" ],
  "schema": "jsonschema.json"
}

// with data accessible via http
{
  "profile": "json-data-resource",
  "name": "resource-name",
  "path": [ "http://example.com/resource-path.json" ],
  "schema": "http://example.com/jsonschema.json"
}
```
A minimal JSON Data Resource example using the data property to inline data looks as follows.
```json
{
  "profile": "json-data-resource",
  "name": "resource-name",
  "data": {
    "id": 1,
    "first_name": "Louise"
  },
  "schema": {
    "type": "object",
    "required": [
      "id"
    ],
    "properties": {
      "id": {
        "type": "integer"
      },
      "first_name": {
        "type": "string"
      }
    }
  }
}
```
A comprehensive JSON Data Resource example with all required, recommended and optional properties looks as follows.
```json
{
  "profile": "json-data-resource",
  "name": "solar-system",
  "path": "http://example.com/solar-system.json",
  "title": "The Solar System",
  "description": "My favourite data about the solar system.",
  "format": "json",
  "mediatype": "application/json",
  "encoding": "utf-8",
  "bytes": 1,
  "hash": "",
  "schema": {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "required": [
      "id"
    ],
    "properties": {
      "id": {
        "type": "integer"
      },
      "name": {
        "type": "string"
      },
      "description": {
        "type": "string"
      }
    }
  },
  "sources": [{
    "title": "The Solar System - 2001",
    "path": "http://example.com/solar-system-2001.json",
    "email": ""
  }],
  "licenses": [{
    "name": "CC-BY-4.0",
    "title": "Creative Commons Attribution 4.0",
    "path": "https://creativecommons.org/licenses/by/4.0/"
  }]
}
```
### Specification

A JSON Data Resource MUST be a Data Resource; that is, it MUST conform to the Data Resource specification.

In addition:

- The Data Resource `schema` property MUST follow the JSON Schema specification, either as a JSON object directly under the property, or as a string referencing another JSON document containing the JSON Schema
- There MUST be a `profile` property with the value `json-data-resource`
- The data the Data Resource describes MUST, if non-inline, be a JSON file
#### JSON file requirements
When "format": "json"
, files must strictly follow the JSON specification. Some implementations MAY
support "format": "jsonc"
, allowing for non-standard single line and block comments (//
and /* */
respectively).
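A minimal sketch of validating an inline JSON Data Resource with the third-party `jsonschema` package (path-referenced data and schemas would need to be fetched first):

```python
import jsonschema

resource = {
    "profile": "json-data-resource",
    "name": "resource-name",
    "data": {"id": 1, "first_name": "Louise"},
    "schema": {
        "type": "object",
        "required": ["id"],
        "properties": {"id": {"type": "integer"}},
    },
}

# Raises jsonschema.exceptions.ValidationError if the data does not conform.
jsonschema.validate(instance=resource["data"], schema=resource["schema"])
```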
### Implementations
None known.
## Describing Data Package Catalogs using the Data Package Format

### Overview
There are scenarios where one needs to describe a collection of data packages, such as when building an online registry, or when building a pipeline that ingests multiple datasets.
In these scenarios, the collection can be described using a “Catalog”, where each dataset is represented as a single resource which has:

```json
{
  "profile": "data-package",
  "format": "json"
}
```
### Specification

The Data Package Catalog builds directly on the Data Package specification. Thus a Data Package Catalog MUST be a Data Package and conform to the Data Package specification.
The Data Package Catalog has the following requirements over and above those imposed by Data Package:

- There MUST be a `profile` property with the value `data-package-catalog`, or a `profile` that extends it
- Each resource MUST also be a Data Package
### Examples
A generic package catalog:
```json
{
  "profile": "data-package-catalog",
  "name": "climate-change-packages",
  "resources": [
    {
      "profile": "json-data-package",
      "format": "json",
      "name": "beacon-network-description",
      "path": "http://beacon.berkeley.edu/hypothetical_deployment_description.json"
    },
    {
      "profile": "tabular-data-package",
      "format": "json",
      "path": "https://pkgstore.datahub.io/core/co2-ppm/10/datapackage.json"
    },
    {
      "profile": "tabular-data-package",
      "name": "co2-fossil-global",
      "format": "json",
      "path": "https://pkgstore.datahub.io/core/co2-fossil-global/11/datapackage.json"
    }
  ]
}
```
A minimal tabular data catalog:
```json
{
  "profile": "tabular-data-package-catalog",
  "name": "datahub-climate-change-packages",
  "resources": [
    {
      "path": "https://pkgstore.datahub.io/core/co2-ppm/10/datapackage.json"
    },
    {
      "name": "co2-fossil-global",
      "path": "https://pkgstore.datahub.io/core/co2-fossil-global/11/datapackage.json"
    }
  ]
}
```
Data packages can also be declared inline in the data catalog:
```json
{
  "profile": "tabular-data-package-catalog",
  "name": "my-data-catalog",
  "resources": [
    {
      "profile": "tabular-data-package",
      "name": "my-dataset",
      // here we list the data files in this dataset
      "resources": [
        {
          "profile": "tabular-data-resource",
          "name": "resource-name",
          "data": [
            {
              "id": 1,
              "first_name": "Louise"
            },
            {
              "id": 2,
              "first_name": "Julia"
            }
          ],
          "schema": {
            "fields": [
              {
                "name": "id",
                "type": "integer"
              },
              {
                "name": "first_name",
                "type": "string"
              }
            ],
            "primaryKey": "id"
          }
        }
      ]
    }
  ]
}
```
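A minimal sketch of consuming a catalog (names are illustrative): iterate the catalog’s resources, loading referenced descriptors and passing inline ones through:

```python
import json
import urllib.request

def load_datasets(catalog: dict) -> list:
    datasets = []
    for resource in catalog["resources"]:
        if "resources" in resource:
            datasets.append(resource)  # inline data package
        else:
            # Each non-inline resource points at a datapackage.json.
            with urllib.request.urlopen(resource["path"]) as response:
                datasets.append(json.load(response))
    return datasets
```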
### Implementations
None known.
## Table Schema: Unique constraints

### Overview
A `primaryKey` uniquely identifies each row in a table. Per SQL standards, it cannot contain `null` values. This pattern implements the SQL UNIQUE constraint by introducing a `uniqueKeys` array, defining one or more row uniqueness constraints which do support `null` values. An additional `uniqueNulls` property controls how `null` values are to be treated in unique constraints.
### Specification

#### `uniqueKeys` (add)
The `uniqueKeys` property, if present, MUST be an array. Each entry (`uniqueKey`) in the array MUST be a string or array (structured as per `primaryKey`) specifying the resource field or fields required to be unique for each row in the table.
#### `uniqueNulls` (add)

The `uniqueNulls` property is a boolean that dictates how `null` values should be treated by all unique constraints set on a resource.
- If `true` (the default), `null` values are treated as unique (per most SQL databases). By this definition, `1, null, null` is UNIQUE.
- If `false`, `null` values are treated like any other value (per Microsoft SQL Server, Python pandas, R data.frame, Google Sheets). By this definition, `1, null, null` is NOT UNIQUE.
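A minimal sketch of a unique-key check honouring `uniqueNulls` (rows are dicts, `None` stands in for `null`; names are illustrative):

```python
def is_unique(rows, key, unique_nulls=True):
    seen = set()
    for row in rows:
        values = tuple(row[field] for field in key)
        if unique_nulls and any(v is None for v in values):
            continue  # null values are treated as unique: never a duplicate
        if values in seen:
            return False
        seen.add(values)
    return True

rows = [{"c": 1}, {"c": None}, {"c": None}]
assert is_unique(rows, ["c"], unique_nulls=True)       # 1, null, null is UNIQUE
assert not is_unique(rows, ["c"], unique_nulls=False)  # nulls compare equal
```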
#### `foreignKeys` (edit)

Per SQL standards, `null` values are permitted in both the local and reference keys of a foreign key. However, reference keys MUST be unique and are therefore equivalent to a `uniqueKey` set on the reference resource (the meaning of which is determined by the reference’s `uniqueNulls`).
Furthermore, per SQL standards, the local key MAY contain keys with field values not present in the reference key if and only if at least one of the fields is locally `null`. For example, `(1, null)` is permitted locally even if the reference is `[(2, 1), (3, 1)]`. This behavior is the same regardless of the value of `uniqueNulls`.
### Examples

#### `null` in unique constraints
| a | b | c | d |
|---|---|---|---|
| 1 | 1 | 1 | 1 |
| 2 | 2 | null | 2 |
| 3 | 2 | null | null |
The above table meets the following primary key and two unique key constraints:
```json
{
  "primaryKey": ["a"],
  "uniqueKeys": [
    ["b", "c"],
    ["c", "d"]
  ],
  "uniqueNulls": true
}
```
The primary key `(a)` only contains unique, non-`null` values. In contrast, the unique keys can contain `null` values. Although unique key `(b, c)` contains two identical keys `(2, null)`, this is permitted because `"uniqueNulls": true` specifies that `null` values are unique. This behavior is consistent with the UNIQUE constraint of PostgreSQL and most other SQL implementations, as illustrated by this dbfiddle.

The same keys would be considered duplicates if `"uniqueNulls": false`, consistent with the UNIQUE constraint of Microsoft SQL Server, as illustrated by this dbfiddle.
#### Setting unique constraints
For a given resource, unique constraints can be set for one field using a field’s `unique` constraint, for one or multiple fields using a `uniqueKey`, and for one or multiple fields using a `foreignKey` referencing the resource. Each of the following examples sets a unique constraint on field `a`:
Field constraints:

```json
{
  "fields": [
    {
      "name": "a",
      "constraints": {
        "unique": true
      }
    }
  ]
}
```
`uniqueKeys`:

```json
{
  "uniqueKeys": [
    "a"
  ]
}
```
`foreignKeys`:

```json
{
  "foreignKeys": [
    {
      "fields": "a",
      "reference": {
        "resource": "",
        "fields": "a"
      }
    }
  ]
}
```
### Implementations
None known.
## Describing files inside a compressed file such as Zip

### Overview
Some datasets need to contain a Zip file (or tar, or another archive format) containing a set of files.

This might happen for practical reasons (datasets containing thousands of files) or because of technical limitations (for example, Zenodo currently doesn’t support subdirectories, and datasets might need subdirectory structures to be useful).
### Implementations
There are no known implementations at present.
### Specification

The `resources` in a `data-package` can contain “recursive resources”, i.e. resources that themselves identify further resources.
### Example
```json
{
  "profile": "data-package",
  "resources": [
    {
      "path": "https://zenodo.org/record/3247384/files/Sea-Bird_Processed_Data.zip",
      "format": "zip",
      "mediatype": "application/zip",
      "bytes": 294294242424,
      "hash": "a27063c614c183b502e5c03bd9c8931b",
      "resources": [
        {
          "path": "file_name.csv",
          "format": "csv",
          "mediatype": "text/csv",
          "bytes": 242421,
          "hash": "0300048878bb9b5804a1f62869d296bc",
          "profile": "tabular-data-resource",
          "schema": "tableschema.json"
        },
        {
          "path": "directory/file_name2.csv",
          "format": "csv",
          "mediatype": "text/csv",
          "bytes": 2424213,
          "hash": "ff9435e0ee350efbe8a4a8779a47caaa",
          "profile": "tabular-data-resource",
          "schema": "tableschema.json"
        }
      ]
    }
  ]
}
```
For a `.tar.gz` it would be the same, changing the `"format"` and the `"mediatype"`.
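A minimal sketch of reading an inner resource, assuming the outer Zip archive has already been downloaded locally (names are illustrative):

```python
import csv
import io
import zipfile

def read_inner_csv(archive_path: str, inner_resource: dict) -> list:
    with zipfile.ZipFile(archive_path) as archive:
        # The inner resource's `path` is relative to the archive root.
        with archive.open(inner_resource["path"]) as member:
            return list(csv.reader(io.TextIOWrapper(member, encoding="utf-8")))

rows = read_inner_csv("Sea-Bird_Processed_Data.zip",
                      {"path": "file_name.csv", "format": "csv"})
```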
### Types of files

Support for `zip` and `tar.gz` might be enough: hopefully everything can be re-packaged using these formats.

To keep implementation and testing simple, only one recursive level is possible: a resource can list `resources` inside (as in the example), but the inner resources cannot contain resources again.
## Missing values per field

### Overview
Characters representing missing values in a table can be defined for all fields in a Tabular Data Resource using the `missingValues` property in a Table Schema. Values that match the `missingValues` are treated as `null`.

The Missing values per field pattern allows different missing values to be specified for each field in a Table Schema. If not specified, each field inherits from the values assigned to `missingValues` at the Tabular Data Resource level.
For example, this data…
| item | description | price |
|------|-------------|-------|
| 1 | Apple | 0.99 |
| tba | Banana | -1 |
| 3 | n/a | 1.20 |
…using this Table Schema…
"schema":{
"fields": [
{
"name": "item",
"title": "An inventory item number",
"type": "integer"
},
{
"name": "description",
"title": "item description",
"type": "string",
"missingValues": [ "n/a"]
},
{
"name": "price",
"title": "cost price",
"type": "number",
"missingValues": [ "-1"]
}
],
"missingValues": [ "tba", "" ]
}
…would be interpreted as…
| item | description | price |
|------|-------------|-------|
| 1 | Apple | 0.99 |
| null | Banana | null |
| 3 | null | 1.20 |
### Specification

A field MAY have a `missingValues` property that MUST be an array where each entry is a `string`. If not specified, each field inherits from the values assigned to `missingValues` at the Tabular Data Resource level.
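A minimal sketch of the inheritance rule (names are illustrative; `None` stands in for `null`):

```python
# Resolve a raw cell value against the field's missingValues, falling
# back to the resource-level list when the field defines none.
def parse_cell(value: str, field: dict, schema: dict):
    missing = field.get("missingValues", schema.get("missingValues", [""]))
    return None if value in missing else value

schema = {"missingValues": ["tba", ""]}
price = {"name": "price", "missingValues": ["-1"]}
item = {"name": "item"}

assert parse_cell("-1", price, schema) is None     # field-level match
assert parse_cell("tba", item, schema) is None     # inherited from resource level
assert parse_cell("0.99", price, schema) == "0.99"
```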
### Implementations
None known.