# Patterns
This document describes various patterns for solving common problems, in ways that are not (yet) specified in any Frictionless Data specification. If we see increased adoption, or wide support, for any pattern, it is a prime candidate for formalising as part of a specification.
# Table of Contents
- Private properties
- Caching of resources
- Compression of resources
- Language support
- Translation support
- Table Schema: Foreign Keys to Data Packages
- Data Package Version
- Data Dependencies
- Table Schema: metadata properties
- JSON Data Resources
- Describing Data Package Catalogs using the Data Package Format
- Table Schema: Unique constraints
- Describing files inside a compressed file such as Zip
- Missing values per field
- Table Schema: Enum labels and ordering
- Table Schema: Relationship between Fields
# Private properties
# Overview
Some software that implements the Frictionless Data specifications may need to store additional information on the various Frictionless Data descriptors.
For example, a data registry that provides metadata via datapackage.json may wish to set an internal version or identifier that is system-specific and should not be considered part of the user-generated metadata.
Properties that store such information should be considered “private”, and by convention, their names should be prefixed with an underscore (_).
# Implementations
There are no known implementations at present.
# Specification
On any Frictionless Data descriptor, data that is not generated by the author/contributors but by software or a system handling the data SHOULD be considered “private” and be prefixed with an underscore (_).
To demonstrate, let’s take the example of a data registry that implements datapackage.json
for storing dataset metadata.
A user might upload a datapackage.json
as follows:
{
"name": "my-package",
"resources": [
{
"name": "my-resource",
"data": [ "my-resource.csv" ]
}
]
}
The registry itself may have a platform-specific version system, and increment versions on each update of the data. To store this information on the datapackage itself, the platform could save this information in a “private” _platformVersion
property as follows:
{
"name": "my-package",
"_platformVersion": 7
"resources": [
{
"name": "my-resource",
"data": [ "my-resource.csv" ]
}
]
}
Usage of “private” properties ensures a clear distinction between descriptor data defined by the user (author/contributor) and any additional data that may be stored by a third party.
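For illustration, a consumer that wants only the author-defined metadata could drop any top-level property whose name starts with an underscore. This is a minimal sketch; the function name is an assumption, not part of any specification.

```python
def strip_private_properties(descriptor: dict) -> dict:
    """Return a copy of a descriptor without "private" (underscore-prefixed) properties."""
    return {key: value for key, value in descriptor.items() if not key.startswith("_")}

descriptor = {"name": "my-package", "_platformVersion": 7, "resources": []}
print(strip_private_properties(descriptor))  # {'name': 'my-package', 'resources': []}
```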
# Caching of resources
# Overview
All Frictionless Data specifications allow resources to be referenced over HTTP or from a local filesystem.
In the case of remote resources accessed over HTTP, there is always the possibility that the remote server will be unavailable, or that the resource itself will be temporarily or permanently removed.
Applications concerned with the persistent storage of data described by Frictionless Data specifications can use a _cache property that mirrors the functionality and usage of the data property and refers to a storage location the application can fall back to if the canonical resource is unavailable.
# Implementations
There are no known implementations of this pattern at present.
# Specification
Implementations MAY handle a _cache property on any descriptor that supports a data property. In the case that the data referenced in data is unavailable, _cache should be used as a fallback to access the data. The handling of the data stored at _cache is beyond the scope of the specification. Implementations might store a copy of the resources in data at ingestion time, update the copy at regular intervals, or use any other method to keep an up-to-date, persistent copy.
Some examples of the _cache property:
{
"name": "my-package",
"resources": [
{
"name": "my-resource",
"data": [ "http://example.com/data/csv/my-resource.csv" ],
"_cache": "my-resource.csv"
},
{
"name": "my-resource",
"data": [ "http://example.com/data/csv/my-resource.csv" ],
"_cache": "http://data.registry.com/user/files/my-resource.csv"
},
{
"name": "my-resource",
"data": [
"http://example.com/data/csv/my-resource.csv",
"http://somewhere-else.com/data/csv/resource2.csv"
],
"_cache": [
"my-resource.csv",
"resource2.csv"
]
},
{
"name": "my-resource",
"data": [ "http://example.com/data/csv/my-resource.csv" ],
"_cache": "my-resource.csv"
}
]
}
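A minimal sketch of the fallback behaviour described above, assuming resources whose data and _cache entries are HTTP URLs or local paths (the helper names are illustrative, not part of any library):

```python
import urllib.request
import urllib.error

def _read(location: str) -> bytes:
    """Read bytes from an HTTP(S) URL or a local path."""
    if location.startswith(("http://", "https://")):
        with urllib.request.urlopen(location) as response:
            return response.read()
    with open(location, "rb") as handle:
        return handle.read()

def read_resource(resource: dict) -> list:
    """Read every entry in `data`, falling back to the matching `_cache` entry on failure."""
    data = resource.get("data", [])
    cache = resource.get("_cache", [])
    if isinstance(cache, str):
        cache = [cache]
    contents = []
    for index, location in enumerate(data):
        try:
            contents.append(_read(location))
        except (urllib.error.URLError, OSError):
            if index >= len(cache):
                raise
            # Canonical resource unavailable: fall back to the cached copy.
            contents.append(_read(cache[index]))
    return contents
```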
# Compression of resources
# Overview
It can be argued that applying compression to data resources can make data package publishing more cost-effective and sustainable. Compressing data resources gives publishers the benefit of reduced storage and bandwidth costs and gives consumers the benefit of shorter download times.
# Implementations
- tabulator-py (Gzip and Zip support)
- datapackage-connector (Gzip support)
- datapackage-m (Gzip support)
# Specification
All compressed resources MUST have a path that allows the compression property to be inferred. If the compression can’t be inferred from the path property (e.g. a custom file extension is used), then the compression property MUST be used to specify the compression.
Supported compression types:
- gz
- zip
Example of a compressed resource with implied compression:
{
"name": "data-resource-compression-example",
"path": "http://example.com/large-data-file.csv.gz",
"title": "Large Data File",
"description": "This large data file benefits from compression.",
"format": "csv",
"mediatype": "text/csv",
"encoding": "utf-8",
"bytes": 1073741824
}
Example of a compressed resource with the compression
property:
{
"name": "data-resource-compression-example",
"path": "http://example.com/large-data-file.csv.gz",
"title": "Large Data File",
"description": "This large data file benefits from compression.",
"format": "csv",
"compression" : "gz",
"mediatype": "text/csv",
"encoding": "utf-8",
"bytes": 1073741824
}
NOTE
Resource properties (e.g. bytes, hash) apply to the compressed object, not to the original uncompressed object.
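For illustration, the inference rule above might be implemented as follows; the helper name and suffix mapping are assumptions, not part of the pattern:

```python
from pathlib import Path
from typing import Optional

# Hypothetical mapping of known path suffixes to compression values.
SUFFIX_TO_COMPRESSION = {".gz": "gz", ".zip": "zip"}

def resource_compression(resource: dict) -> Optional[str]:
    """Return the explicit compression property, or infer it from the path suffix."""
    if "compression" in resource:
        return resource["compression"]
    suffix = Path(resource.get("path", "")).suffix.lower()
    return SUFFIX_TO_COMPRESSION.get(suffix)

print(resource_compression({"path": "http://example.com/large-data-file.csv.gz"}))  # gz
```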
# Language support
# Overview
Language support is a different concern from translation support. Language support deals with declaring the default language of a descriptor and of the data it contains in the resources array. Language support makes no claim about the presence of translations when one or more languages are supported in a descriptor or in data. Via the introduction of a languages array on any descriptor, we can declare the default language and any other languages that SHOULD be found in the descriptor and the data.
# Implementations
There are no known implementations of this pattern at present.
# Specification
Any Frictionless Data descriptor can declare the language configuration of its metadata and data with the languages array.
languages MUST be an array, and the first item in the array is the default (non-translated) language.
If no languages array is present, the default language is English (en), and the descriptor is therefore equivalent to:
{
"name": "my-package",
"languages": ["en"]
}
The presence of a languages array does not ensure that the metadata or the data has translations for all supported languages.
The descriptor and data sources MUST be in the default language. The descriptor and data sources MAY have translations for the other languages in the array, using the same language codes. If a translation is not present, implementing code MUST fall back to the default language string.
Example usage of languages, implemented in the metadata of a descriptor:
{
"name": "sun-package",
"languages": ["es", "en"],
"title": "Sol"
}
# which is equivalent to
{
"name": "sun-package",
"languages": ["es", "en"],
"title": {
"": "Sol",
"en": "Sun"
}
}
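To make the fallback rule concrete, here is a rough sketch of resolving a possibly translated property, assuming the object form shown above where the empty-string key holds the default-language text (the function name is illustrative):

```python
def resolve_text(value, languages, requested):
    """Resolve a possibly-translated property value, falling back to the default language.

    `value` is either a plain string (default language only) or an object keyed by
    language code, where "" holds the default-language string.
    """
    default = languages[0] if languages else "en"
    if isinstance(value, str):
        return value  # untranslated: always in the default language
    if requested != default and requested in value:
        return value[requested]
    return value.get("", value.get(default))

title = {"": "Sol", "en": "Sun"}
print(resolve_text(title, ["es", "en"], "en"))  # Sun
print(resolve_text(title, ["es", "en"], "fr"))  # Sol (fallback to the default language)
```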
Example usage of languages
implemented in the data described by a resource:
# resource descriptor
{
"name": "solar-system",
"data": [ "solar-system.csv" ]
"fields": [
...
],
"languages": ["es", "en", "he", "fr", "ar"]
}
# data source
# some languages have translations, some do not
# assumes a certain translation pattern, see the related section
id,name,name@fr,name@he,name@en
1,Sol,Soleil,שמש,Sun
2,Luna,Lune,ירח,Moon
# Translation support
# Overview
Following on from a general pattern for language support, and the explicit support of metadata translations in Frictionless Data descriptors, it would be desirable to support translations in source data.
We currently have two patterns for this in discussion. Both patterns arise from real-world implementations that are not specifically tied to Frictionless Data.
One pattern suggests inline translations within the source data, reserving the @ symbol in field names to denote translations.
The other describes a pattern for storing additional translation sources, co-located with the “source” file described in a descriptor's data property.
# Implementations
There are no known implementations of this pattern in the Frictionless Data core libraries at present.
# Specification
# Inline
Uses a column naming convention for accessing translations.
Tabular resource descriptors support translations using the {field_name}@{lang_code} syntax for translated field names. lang_code MUST be present in the languages array that applies to the resource.
Any field with the @ symbol MUST be a translation field for another field of data, and MUST be parsable according to the {field_name}@{lang_code} pattern.
If a translation field is found in the data that does not have a corresponding field (e.g. title@es but no title), then the translation field SHOULD be ignored.
If a translation field is found in the data that uses a lang_code not declared in the applied languages array, then the translation field SHOULD be ignored.
Translation fields MUST NOT be described in a schema's fields array.
Translation fields MUST match the type, format and constraints of the field they translate, with a single exception: translation fields are never required, and therefore constraints.required is always false for a translation field.
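A hedged sketch of the header-parsing rules above, assuming CSV headers are available as a simple list (helper names are illustrative):

```python
import re

# Split headers into base fields and translation fields, applying the SHOULD-ignore rules.
TRANSLATION_PATTERN = re.compile(r"^(?P<field>.+)@(?P<lang>[A-Za-z-]+)$")

def classify_headers(headers, languages):
    base_fields = [header for header in headers if "@" not in header]
    translations = {}
    for header in headers:
        match = TRANSLATION_PATTERN.match(header)
        if not match:
            continue
        field, lang = match.group("field"), match.group("lang")
        # Ignore translations of unknown fields or of undeclared languages.
        if field in base_fields and lang in languages:
            translations.setdefault(field, []).append(lang)
    return base_fields, translations

headers = ["id", "name", "name@fr", "name@he", "name@en", "title@es"]
print(classify_headers(headers, ["es", "en", "he", "fr", "ar"]))
# (['id', 'name'], {'name': ['fr', 'he', 'en']})
```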
# Co-located translation sources
Uses a file storage convention for accessing translations.
To be contributed by @jheeffer
- Has to handle local and remote resources
- Has to be explicit about the translation key/value pattern in the translation files
# local
data/file1.csv
data/lang/file1-en.csv
data/lang/file1-es.csv
# remote
http://example.com/data/file2.csv
http://example.com/data/lang/file2-en.csv
http://example.com/data/lang/file2-es.csv
# Table Schema: Foreign Keys to Data Packages
# Overview
A foreign key is a reference where values in a field (or fields) in a Tabular Data Resource link to values in a field (or fields) in a Tabular Data Resource in the same or in another Tabular Data Package.
This pattern allows users to link values in a field (or fields) in a Tabular Data Resource to values in a field (or fields) in a Tabular Data Resource in a different Tabular Data Package.
# Specification
The foreignKeys array MAY have a property package. This property MUST be either:
- a string that is a fully qualified HTTP address to a Data Package datapackage.json file
- a data package name that can be resolved by a canonical data package registry
If the referenced data package has an id that is a fully qualified HTTP address, it SHOULD be used as the package value.
For example:
"foreignKeys": [{
"fields": ["code"],
"reference": {
"package": "https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/donation-codes/datapackage.json",
"resource": "donation-codes",
"fields": ["donation code"]
}
}]
# Data Package Version
# Specification
The Data Package version format follows the Semantic Versioning specification: MAJOR.MINOR.PATCH
The version numbers, and the way they change, convey meaning about how the data package has been modified from one version to the next.
Given a Data Package version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible changes, e.g.
- Change the data package, resource or field name or identifier
- Add, remove or re-order fields
- Change a field type or format
- Change a field constraint to be more restrictive
- Combine, split, delete or change the meaning of data that is referenced by another data resource
MINOR version when you add data or change metadata in a backwards-compatible manner, e.g.
- Add a new data resource to a data package
- Add new data to an existing data resource
- Change a field constraint to be less restrictive
- Update a reference to another data resource
- Change data to reflect changes in referenced data
PATCH version when you make backwards-compatible fixes, e.g.
- Correct errors in existing data
- Change descriptive metadata properties
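As an informal illustration only (not part of this specification), a helper applying a chosen increment might look like this:

```python
def bump_version(version: str, change: str) -> str:
    """Increment a MAJOR.MINOR.PATCH version string for a given kind of change."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":   # incompatible change, e.g. a field was renamed or removed
        return f"{major + 1}.0.0"
    if change == "minor":   # backwards-compatible addition, e.g. new rows appended
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # backwards-compatible fix, e.g. a corrected value
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change kind: {change}")

print(bump_version("1.2.3", "minor"))  # 1.3.0
```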
# Scenarios
- You are developing your data through public consultation. Start your initial data release at 0.1.0
- You release your data for the first time. Use version 1.0.0
- You append last month's data to an existing release. Increment the MINOR version number
- You append a column to the data. Increment the MAJOR version number
- You relocate the data to a new URL or path. No change in the version number
- You change a title, description, or other descriptive metadata. Increment the PATCH version
- You fix a data entry error by modifying a value. Increment the PATCH version
- You split a row of data in a foreign key reference table. Increment the MAJOR version number
- You update the data and schema to refer to a new version of a foreign key reference table. Increment the MINOR version number
# Data Dependencies
Consider a situation where data packages are part of a tool chain that, say, loads all of the data into an SQL database. You can then imagine a situation where one requires package A, which in turn requires packages B and C.
In this case you want to specify that A depends on B and C, and that “installing” A should also install B and C. This is the purpose of the dataDependencies property.
# Specification
dataDependencies is an object. It follows the same format as the CommonJS Packages spec v1.1. Each dependency defines the lowest compatible MAJOR[.MINOR[.PATCH]] dependency versions (only one per MAJOR version) with which the package has been tested and is assured to work. The version may be a simple version string (see the version property for acceptable forms), or it may be an object group of dependencies which define a set of options, any one of which satisfies the dependency. The ordering of the group is significant and earlier entries have higher priority. Example:
"dataDependencies": {
"country-codes": "",
"unemployment": "2.1",
"geo-boundaries": {
"acmecorp-geo-boundaries": ["1.0", "2.0"],
"othercorp-geo-boundaries": "0.9.8",
},
}
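A rough sketch of how a tool might walk dataDependencies, treating a string as a version requirement and an object as an ordered group of alternatives; the resolution logic and the `available` mapping are assumptions for illustration only:

```python
def resolve_dependencies(data_dependencies: dict, available: dict) -> dict:
    """Pick a concrete (name, version) for each dependency.

    `available` maps package names to the version a registry could supply;
    an object group is tried in order and the first available option wins.
    """
    resolved = {}
    for name, requirement in data_dependencies.items():
        if isinstance(requirement, dict):
            # Ordered group of alternatives: earlier entries have higher priority.
            for option in requirement:
                if option in available:
                    resolved[name] = (option, available[option])
                    break
        else:
            # Simple version string ("" means any version is acceptable).
            resolved[name] = (name, available.get(name, requirement))
    return resolved
```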
# Implementations
None known.
# Table Schema: metadata properties
# Overview
Table Schemas need their own metadata to be stand-alone and interpreted without relying on other contextual information (Data Package metadata for example). Adding metadata to describe schemas in a structured way would help users to understand them and would increase their sharing and reuse.
Currently it is possible to add custom properties to a Table Schema, but the lack of consensus about those properties restricts common tooling and wider adoption.
# Use cases
- Documentation: generating Markdown documentation from the schema itself is a useful use case, and contextual information (description, version, authors…) needs to be retrieved.
- Cataloging: open data standardisation can be increased by improving Table Schemas shareability, for example by searching and categorising them (by keywords, countries, full-text…) in catalogs.
- Machine readability: tools like Goodtables could use catalogs to access Table Schemas in order to help users validate tabular files against existing schemas. Metadata would be needed for tools to find and read those schemas.
# Specification
This pattern introduces the following properties to the Table Schema spec (using the Frictionless Data core dictionary as much as possible):
- name: An identifier string for this schema.
- title: A human-readable title for this schema.
- description: A text description for this schema.
- keywords: The keyword(s) that describe this schema. Tags are useful to categorise and catalog schemas.
- countryCode: The ISO 3166-1 alpha-2 code for the country where this schema is primarily used. Since open data schemas are very country-specific, it's useful to have this information in a structured way.
- homepage: The home on the web that is related to this schema.
- path: A fully qualified URL for this schema. The direct path to the schema itself can be useful to help access it (i.e. machine readability).
- image: An image to represent this schema. An optional illustration can be useful, for example in catalogs, to differentiate schemas in a list.
- licenses: The license(s) under which this schema is published.
- resources: Example tabular data resource(s) validated or invalidated against this schema. Oftentimes, schemas are shared with example resources to illustrate them, with valid or even invalid files (e.g. with constraint errors).
- sources: The source(s) used to create this schema. In some cases, schemas are created after a legal text or a draft specification in a human-readable document. In those cases, it's useful to share them with the schema.
- created: The datetime on which this schema was created.
- lastModified: The datetime on which this schema was last modified.
- version: A unique version number for this schema.
- contributors: The contributors to this schema.
# Example schema
{
"$schema": "https://specs.frictionlessdata.io/schemas/table-schema.json",
"name": "irve",
"title": "Infrastructures de recharge de véhicules électriques",
"description": "Spécification du fichier d'échange relatif aux données concernant la localisation géographique et les caractéristiques techniques des stations et des points de recharge pour véhicules électriques",
"keywords": [
"electric vehicle",
"ev",
"charging station",
"mobility"
],
"countryCode": "FR",
"homepage": "https://github.com/etalab/schema-irve",
"path": "https://github.com/etalab/schema-irve/raw/v1.0.1/schema.json",
"image": "https://github.com/etalab/schema-irve/raw/v1.0.1/irve.png",
"licenses": [
{
"title": "Creative Commons Zero v1.0 Universal",
"name": "CC0-1.0",
"path": "https://creativecommons.org/publicdomain/zero/1.0/"
}
],
"resources": [
{
"title": "Valid resource",
"name": "exemple-valide",
"path": "https://github.com/etalab/schema-irve/raw/v1.0.1/exemple-valide.csv"
},
{
"title": "Invalid resource",
"name": "exemple-invalide",
"path": "https://github.com/etalab/schema-irve/raw/v1.0.1/exemple-invalide.csv"
}
],
"sources": [
{
"title": "Arrêté du 12 janvier 2017 relatif aux données concernant la localisation géographique et les caractéristiques techniques des stations et des points de recharge pour véhicules électriques",
"path": "https://www.legifrance.gouv.fr/eli/arrete/2017/1/12/ECFI1634257A/jo/texte"
}
],
"created": "2018-06-29",
"lastModified": "2019-05-06",
"version": "1.0.1",
"contributors": [
{
"title": "John Smith",
"email": "[email protected]",
"organization": "Etalab",
"role": "author"
},
{
"title": "Jane Doe",
"email": "[email protected]",
"organization": "Civil Society Organization X",
"role": "contributor"
}
],
"fields": [ ]
}
# Implementations
The following links are actual examples already using this pattern, but not 100% aligned with our proposal. The point is to make Table Schema users converge towards a common pattern before considering changing the spec.
- @OpenDataFrance has initiated the creation of Table Schemas to standardise common French open data datasets. Their Markdown documentation is generated automatically from the schemas (using some scripts), including contextual information.
- A tool called Validata was developed, based on Goodtables, to help French open data producers follow the schemas. It uses metadata from the schemas to present them.
- @Etalab has launched schema.data.gouv.fr, an official open data schema catalog, which is specific to France. It needs additional metadata in the schemas to validate them.
- Example Table Schema from @Etalab using metadata properties.
# JSON Data Resources
# Overview
A simple format to describe a single structured JSON data resource. It includes support both for metadata such as author and title and for a schema to describe the data.
# Introduction
A JSON Data Resource is a type of Data Resource specialized for describing structured JSON data.
JSON Data Resource extends Data Resource in the following key ways:
- The schema property MUST follow the JSON Schema specification, either as a JSON object directly under the property, or as a string referencing another JSON document containing the JSON Schema.
# Examples
A minimal JSON Data Resource, referencing external JSON documents, looks as follows.
// with data and a schema accessible via the local filesystem
{
"profile": "json-data-resource",
"name": "resource-name",
"path": [ "resource-path.json" ],
"schema": "jsonschema.json"
}
// with data accessible via http
{
"profile": "json-data-resource",
"name": "resource-name",
"path": [ "http://example.com/resource-path.json" ],
"schema": "http://example.com/jsonschema.json"
}
A minimal JSON Data Resource example using the data property to inline data looks as follows.
{
"profile": "json-data-resource",
"name": "resource-name",
"data": {
"id": 1,
"first_name": "Louise"
},
"schema": {
"type": "object",
"required": [
"id"
],
"properties": {
"id": {
"type": "integer"
},
"first_name": {
"type": "string"
}
}
}
}
A comprehensive JSON Data Resource example with all required, recommended and optional properties looks as follows.
{
"profile": "json-data-resource",
"name": "solar-system",
"path": "http://example.com/solar-system.json",
"title": "The Solar System",
"description": "My favourite data about the solar system.",
"format": "json",
"mediatype": "application/json",
"encoding": "utf-8",
"bytes": 1,
"hash": "",
"schema": {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"required": [
"id"
],
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "string"
},
"description": {
"type": "string"
}
}
},
"sources": [{
"title": "The Solar System - 2001",
"path": "http://example.com/solar-system-2001.json",
"email": ""
}],
"licenses": [{
"name": "CC-BY-4.0",
"title": "Creative Commons Attribution 4.0",
"path": "https://creativecommons.org/licenses/by/4.0/"
}]
}
# Specification
A JSON Data Resource MUST be a Data Resource; that is, it MUST conform to the Data Resource specification.
In addition:
- The Data Resource schema property MUST follow the JSON Schema specification, either as a JSON object directly under the property, or as a string referencing another JSON document containing the JSON Schema
- There MUST be a profile property with the value json-data-resource
- The data the Data Resource describes MUST, if non-inline, be a JSON file
# JSON file requirements
When "format": "json"
, files must strictly follow the JSON specification (opens new window). Some implementations MAY
support "format": "jsonc"
, allowing for non-standard single line and block comments (//
and /* */
respectively).
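As one possible illustration (not a prescribed implementation), the inline example above can be checked with the third-party jsonschema package, which implements the JSON Schema specification:

```python
import jsonschema  # third-party: pip install jsonschema

resource = {
    "profile": "json-data-resource",
    "name": "resource-name",
    "data": {"id": 1, "first_name": "Louise"},
    "schema": {
        "type": "object",
        "required": ["id"],
        "properties": {"id": {"type": "integer"}, "first_name": {"type": "string"}},
    },
}

# Raises jsonschema.ValidationError if the inline data does not match the schema.
jsonschema.validate(instance=resource["data"], schema=resource["schema"])
```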
# Implementations
None known.
# Describing Data Package Catalogs using the Data Package Format
# Overview
There are scenarios where one needs to describe a collection of data packages, such as when building an online registry, or when building a pipeline that ingests multiple datasets.
In these scenarios, the collection can be described using a “Catalog”, where each dataset is represented as a single resource which has:
{
"profile": "data-package",
"format": "json"
}
# Specification
The Data Package Catalog builds directly on the Data Package specification. Thus a Data Package Catalog MUST be a Data Package and conform to the Data Package specification.
The Data Package Catalog has the following requirements over and above those imposed by Data Package:
- There MUST be a profile property with the value data-package-catalog, or a profile that extends it
- Each resource MUST also be a Data Package
# Examples
A generic package catalog:
{
"profile": "data-package-catalog",
"name": "climate-change-packages",
"resources": [
{
"profile": "json-data-package",
"format": "json",
"name": "beacon-network-description",
"path": "https://http://beacon.berkeley.edu/hypothetical_deployment_description.json"
},
{
"profile": "tabular-data-package",
"format": "json",
"path": "https://pkgstore.datahub.io/core/co2-ppm/10/datapackage.json"
},
{
"profile": "tabular-data-package",
"name": "co2-fossil-global",
"format": "json",
"path": "https://pkgstore.datahub.io/core/co2-fossil-global/11/datapackage.json"
}
]
}
A minimal tabular data catalog:
{
"profile": "tabular-data-package-catalog",
"name": "datahub-climate-change-packages",
"resources": [
{
"path": "https://pkgstore.datahub.io/core/co2-ppm/10/datapackage.json"
},
{
"name": "co2-fossil-global",
"path": "https://pkgstore.datahub.io/core/co2-fossil-global/11/datapackage.json"
}
]
}
Data packages can also be declared inline in the data catalog:
{
"profile": "tabular-data-package-catalog",
"name": "my-data-catalog",
"resources": [
{
"profile": "tabular-data-package",
"name": "my-dataset",
// here we list the data files in this dataset
"resources": [
{
"profile": "tabular-data-resource",
"name": "resource-name",
"data": [
{
"id": 1,
"first_name": "Louise"
},
{
"id": 2,
"first_name": "Julia"
}
],
"schema": {
"fields": [
{
"name": "id",
"type": "integer"
},
{
"name": "first_name",
"type": "string"
}
],
"primaryKey": "id"
}
}
]
}
]
}
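A rough sketch of how a consumer could walk such a catalog, fetching each referenced datapackage.json and keeping inline packages as they are (illustrative only; a real implementation would add validation and error handling):

```python
import json
import urllib.request

def load_catalog_packages(catalog: dict) -> list:
    """Return one descriptor dict per data package listed in a catalog."""
    packages = []
    for resource in catalog.get("resources", []):
        if "path" in resource:
            # Reference to a datapackage.json descriptor (assumed here to be an HTTP(S) URL).
            with urllib.request.urlopen(resource["path"]) as response:
                packages.append(json.load(response))
        else:
            # Data package declared inline in the catalog.
            packages.append(resource)
    return packages
```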
# Implementations
None known.
# Table Schema: Unique constraints
# Overview
A primaryKey
uniquely identifies each row in a table. Per SQL standards, it
cannot contain null
values. This pattern implements the SQL UNIQUE constraint
by introducing a uniqueKeys
array, defining one or more row uniqueness
constraints which do support null
values. An additional uniqueNulls
property
controls how null
values are to be treated in unique constraints.
# Specification
# uniqueKeys (add)
The uniqueKeys
property, if present, MUST
be an array. Each entry
(uniqueKey
) in the array MUST
be a string or array (structured as per
primaryKey
) specifying the resource field or fields required to be unique for
each row in the table.
# uniqueNulls (add)
The uniqueNulls property is a boolean that dictates how null values should be treated by all unique constraints set on a resource.
- If true (the default), null values are treated as unique (per most SQL databases). By this definition, 1, null, null is UNIQUE.
- If false, null values are treated like any other value (per Microsoft SQL Server, Python pandas, R data.frame, Google Sheets). By this definition, 1, null, null is NOT UNIQUE. A sketch of this check appears below.
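A minimal sketch, assuming in-memory rows represented as dictionaries with None standing in for null, of how a validator might check a single unique key under both settings (the helper name is hypothetical):

```python
def check_unique_key(rows, key_fields, unique_nulls=True):
    """Return True if `key_fields` is unique across `rows` under the uniqueNulls setting."""
    seen = set()
    for row in rows:
        key = tuple(row[field] for field in key_fields)
        if unique_nulls and any(value is None for value in key):
            continue  # keys containing null are always treated as unique
        if key in seen:
            return False
        seen.add(key)
    return True

rows = [
    {"a": 1, "b": 1, "c": 1, "d": 1},
    {"a": 2, "b": 2, "c": None, "d": 2},
    {"a": 3, "b": 2, "c": None, "d": None},
]
print(check_unique_key(rows, ["b", "c"], unique_nulls=True))   # True
print(check_unique_key(rows, ["b", "c"], unique_nulls=False))  # False
```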
# foreignKeys (edit)
Per SQL standards, null
values are permitted in both the local and reference
keys of a foreign key. However, reference keys MUST
be unique and are
therefore equivalent to a uniqueKey
set on the reference resource (the meaning
of which is determined by the reference uniqueNulls
).
Furthermore, per SQL standards, the local key MAY
contain keys with field
values not present in the reference key if and only if at least one of the
fields is locally null
. For example, (1, null)
is permitted locally even if
the reference is [(2, 1), (3, 1)]
. This behavior is the same regardless of the
value of uniqueNulls
.
# Examples
# null in unique constraints
a | b | c | d |
---|---|---|---|
1 | 1 | 1 | 1 |
2 | 2 | null | 2 |
3 | 2 | null | null |
The above table meets the following primary key and two unique key constraints:
{
"primaryKey": ["a"],
"uniqueKeys": [
["b", "c"],
["c", "d"]
],
"uniqueNulls": true
}
The primary key (a)
only contains unique, non-null
values. In contrast, the
unique keys can contain null
values. Although unique key (b, c)
contains two
identical keys (2, null)
, this is permitted because uniqueNulls: true
specifies that null
values are unique. This behavior is consistent with the
UNIQUE constraint of PostgreSQL and most other SQL implementations, as
illustrated by this dbfiddle.
The same keys would be considered duplicates if uniqueNulls: false, consistent with the UNIQUE constraint of Microsoft SQL Server, as illustrated by this dbfiddle.
# Setting unique constraints
For a given resource, unique constraints can be set for one field using a
field’s unique
constraint, for one or multiple fields using a uniqueKey
, and
for one or multiple fields using a foreignKey
referencing the resource. Each of the following examples sets a unique constraint on field a:
Field constraints
{
"fields": [
{
"name": "a",
"constraints": {
"unique": true
}
}
]
}
uniqueKeys
{
"uniqueKeys": [
"a"
]
}
foreignKeys
{
"foreignKeys": [
{
"fields": "a",
"reference": {
"resource": "",
"fields": "a"
}
}
]
}
# Implementations
None known.
# Describing files inside a compressed file such as Zip
# Overview
Some datasets need to contain a Zip file (or tar, or other formats) containing a set of files.
This might happen for practical reasons (datasets containing thousands of files) or because of technical limitations (for example, Zenodo currently doesn't support subdirectories, and datasets might need subdirectory structures to be useful).
# Implementations
There are no known implementations at present.
# Specification
The resources in a data-package can contain “recursive resources”: entries that themselves identify a new set of resources.
# Example
{
"profile": "data-package",
"resources": [
{
"path": "https://zenodo.org/record/3247384/files/Sea-Bird_Processed_Data.zip",
"format": "zip",
"mediatype": "application/zip",
"bytes": "294294242424",
"hash": "a27063c614c183b502e5c03bd9c8931b",
"resources": [
{
"path": "file_name.csv",
"format": "csv",
"mediatype": "text/csv",
"bytes": 242421,
"hash": "0300048878bb9b5804a1f62869d296bc",
"profile": "tabular-data-resource",
"schema": "tableschema.json"
},
{
"path": "directory/file_name2.csv",
"format": "csv",
"mediatype": "text/csv",
"bytes": 2424213,
"hash": "ff9435e0ee350efbe8a4a8779a47caaa",
"profile": "tabular-data-resource",
"schema": "tableschema.json"
}
]
}
]
}
For a .tar.gz it would be the same, changing the "format" and the "mediatype".
# Types of files
Support for Zip and tar.gz might be enough: hopefully everything can be re-packaged using these formats.
To keep the implementation and testing simple, only one recursive level is possible. A resource can list resources inside it (as in the example), but the inner resources cannot contain resources again.
# Missing values per field
# Overview
Characters representing missing values in a table can be defined for all fields in a Tabular Data Resource using the missingValues property in a Table Schema. Values that match missingValues are treated as null.
The missing values per field pattern allows different missing values to be specified for each field in a Table Schema. If not specified, each field inherits the values assigned to missingValues at the Tabular Data Resource level.
For example, this data…
item | description | price |
---|---|---|
1 | Apple | 0.99 |
tba | Banana | -1 |
3 | n/a | 1.20 |
…using this Table Schema…
"schema":{
"fields": [
{
"name": "item",
"title": "An inventory item number",
"type": "integer"
},
{
"name": "description",
"title": "item description",
"type": "string",
"missingValues": [ "n/a"]
},
{
"name": "price",
"title": "cost price",
"type": "number",
"missingValues": [ "-1"]
}
],
"missingValues": [ "tba", "" ]
}
…would be interpreted as…
item | description | price |
---|---|---|
1 | Apple | 0.99 |
null | Banana | null |
3 | null | 1.20 |
# Specification
A field MAY have a missingValues property that MUST be an array where each entry is a string. If not specified, each field inherits the values assigned to missingValues at the Tabular Data Resource level.
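A minimal sketch of how a reader might apply these rules, assuming the field-level list replaces (rather than combines with) the resource-level list, as discussed later in this document (helper names are illustrative):

```python
def effective_missing_values(field: dict, resource_missing_values: list) -> list:
    """Field-level missingValues, if present, otherwise the resource-level list."""
    return field.get("missingValues", resource_missing_values)

def apply_missing_values(value: str, field: dict, resource_missing_values: list):
    """Return None when a cell matches the missing values that apply to its field."""
    return None if value in effective_missing_values(field, resource_missing_values) else value

schema_missing = ["tba", ""]
price_field = {"name": "price", "type": "number", "missingValues": ["-1"]}
print(apply_missing_values("-1", price_field, schema_missing))   # None
print(apply_missing_values("tba", price_field, schema_missing))  # "tba" (field-level list replaces the resource-level one)
```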
# Implementations
None known.
# Table Schema: Enum labels and ordering
# Overview
Many software packages for manipulating and analyzing tabular data have special
features for working with categorical variables. These include:
- Value labels or formats (Stata, SAS and SPSS)
- Categoricals (Pandas)
- Factors (R)
- CategoricalVectors (Julia)
These features can result in more efficient storage and faster runtime
performance, but more importantly, facilitate analysis by indicating that a
variable should be treated as categorical and by permitting the logical order
of the categories to differ from their lexical order. And in the case of value
labels, they permit the analyst to work with variables in numeric form (e.g.,
in expressions, when fitting models) while generating output (e.g., tables,
plots) that is labeled with informative strings.
While these features are of limited use in some disciplines, others rely
heavily on them (e.g., social sciences, epidemiology, clinical research,
etc.). Thus, before these disciplines can begin to use Frictionless in a
meaningful way, both the standards and the software tools need to support
these features. This pattern addresses necessary extensions to the
Table Schema.
# Principles
Before describing the proposed extensions, here are the principles on which
they are based:
- Extensions should be software agnostic (i.e., no additions to the official schema targeted toward a specific piece of software). While the extensions are intended to support the use of features not available in all software, the resulting data package should continue to work as well as possible with software that does not have those features.
- Related to (1), extensions should only include metadata that describe the data themselves, not instructions for what a specific software package should do with the data. Users who want to include the latter may do so within a sub-namespace such as custom (e.g., see Issues #103 and #663).
- Extensions must be backward compatible (i.e., not break existing tools, workflows, etc. for working with Frictionless packages).
It is worth emphasizing that the scope of the proposed extensions is strictly
limited to the information necessary to make use of the features for working
with categorical data provided by the software packages listed above. Previous
discussions of this issue have occasionally included references to additional
variable-level metadata (e.g., multiple sets of category labels such as both
“short labels” and longer “descriptions”, or links to common data elements,
controlled vocabularies or rdfTypes). While these additional metadata are
undoubtedly useful, we speculate that the large majority of users who would
benefit from the extensions proposed here would not have and/or utilize such
information, and therefore argue that these should be considered under a
separate proposal.
# Implementations
Our proposal to add a field-specific enumOrdered property has been raised here and here.
Discussions regarding supporting software providing features for working with categorical variables appear in the following GitHub issues:
- https://github.com/frictionlessdata/specs/issues/156
- https://github.com/frictionlessdata/specs/issues/739
and in the Frictionless Data forum:
- https://discuss.okfn.org/t/can-you-add-code-descriptions-to-a-data-package/
- https://discuss.okfn.org/t/something-like-rs-ordered-factors-or-enums-as-column-type/
Finally, while we are unaware of any existing implementations intended for general use, it is likely that many users are already exploiting the fact that arbitrary fields may be added to the table schema to support internal implementations.
# Proposed extensions
We propose two extensions to Table Schema:
- Add an optional field-specific enumOrdered property, which can be used when constructing a categorical (or factor) to indicate that the variable is ordinal.
- Add an optional field-specific enumLabels property for use when data are stored using integer or other codes rather than the category labels. This contains an object mapping the codes appearing in the data (keys) to what they mean (values), and can be used by software to construct corresponding value labels or categoricals (when supported) or to translate the values when reading the data.
These extensions are fully backward compatible, since they are optional and not providing them is valid.
Here is an example of a categorical variable using extension (1):
{
"fields": [
{
"name": "physical_health",
"type": "string",
"constraints": {
"enum": [
"Poor",
"Fair",
"Good",
"Very good",
"Excellent",
]
}
"enumOrdered": true
}
],
"missingValues": ["Don't know","Refused","Not applicable"]
}
This is our preferred strategy, as it provides all of the information
necessary to support the categorical functionality of the software packages
listed above, while still yielding a usable result for software without such
capability. As described below, value labels or categoricals can be created
automatically based on the ordering of the values in the enum
array, and the
missingValues
can be incorporated into the value labels or categoricals if
desired. In those cases where it is desired to have more control over how the
value labels are constructed, this information can be stored in a separate
file in JSON format or as part of a custom extension to the table schema.
Since such instructions do not describe the data themselves (but only how a
specific software package should handle them), and since they are often
software- and/or user-specific, we argue that they should not be included in
the official table schema.
Alternatively, those who wish to store their data in encoded form (e.g., this
is the default for data exports from REDCap, a
commonly-used platform for collecting data for clinical studies) may use
extension (2) to do so:
{
"fields": [
{
"name": "physical_health",
"type": "integer",
"constraints": {
"enum": [1,2,3,4,5]
},
"enumOrdered": true,
"enumLabels": {
"1": "Poor",
"2": "Fair",
"3": "Good",
"4": "Very good",
"5": "Excellent"
}
}
],
"missingValues": ["Don't know","Refused","Not applicable"]
}
Note that although the field type is integer
, the keys in the enumLabels
object must be wrapped in double quotes because this is required by the JSON
file format.
A second variant of the example above is the following:
{
"fields": [
{
"name": "physical_health",
"type": "integer",
"constraints": {
"enum": [1,2,3,4,5]
},
"enumOrdered": true,
"enumLabels": {
"1": "Poor",
"2": "Fair",
"3": "Good",
"4": "Very good",
"5": "Excellent",
".a": "Don't know",
".b": "Refused",
".c": "Not applicable"
}
}
],
"missingValues": [".a",".b",".c"]
}
This represents encoded data exported from software with support for value
labels. The values .a
, .b
, etc. are known as extended missing values
(Stata and SAS only) and provide 26 unique missing values for numeric fields
(both integer and float) in addition to the system missing value ("."); in
SPSS these would be replaced with specially designated integers, typically
negative (e.g., -97, -98 and -99).
# Specification
- A field with an enum constraint or an enumLabels property MAY have an enumOrdered property that MUST be a boolean. A value of true indicates that the field should be treated as having an ordinal scale of measurement, with the ordering given by the order of the field's enum array or by the lexical order of the enumLabels object's keys, with the latter taking precedence. Fields without an enum constraint or an enumLabels property, or for which the enumLabels keys do not include all values observed in the data (excluding any values specified in the missingValues property), MUST NOT have an enumOrdered property, since in that case the correct ordering of the data is ambiguous. The absence of an enumOrdered property MUST NOT be taken to imply enumOrdered: false.
- A field MAY have an enumLabels property that MUST be an object. This property SHOULD be used to indicate how the values in the data (represented by the object's keys) are to be labeled or translated (represented by the corresponding value). As required by the JSON format, the object's keys must be listed as strings (i.e., wrapped in double quotes). The keys MAY include values that do not appear in the data and MAY omit some values that do appear in the data. For clarity and to avoid unintentional loss of information, the object's values SHOULD be unique.
# Suggested implementations
Note: The use cases below address only reading data from a Frictionless data
package; it is assumed that implementations will also provide the ability to
write Frictionless data packages using the schema extensions proposed above.
We suggest two types of implementations:
1. Additions to the official Python Frictionless Framework to generate software-specific scripts that may be executed by a specific software package to read data from a Frictionless data package and create the appropriate value labels or categoricals, as described below. These scripts can then be included along with the data in the package itself.
2. Software-specific extension packages that may be installed to permit users of that software to read data from a Frictionless data package directly, automatically creating the appropriate value labels or categoricals as described below.
The advantage of (1) is that it doesn’t require users to install another
software package, which may in some cases be difficult or impossible. The
advantage of (2) is that it provides native support for working with
Frictionless data packages, and may be both easier and faster once the package
is installed. We are in the process of implementing both approaches for Stata;
implementations for the other software listed above are straightforward.
# Software that supports value labels (Stata, SAS or SPSS)
1. In cases where a field has an enum constraint but no enumLabels property, automatically generate a value label mapping the integers 1, 2, 3, … to the enum values in order, use this to encode the field (thereby changing its type from string to integer), and attach the value label to the field. Provide an option to skip automatically dropping values specified in the missingValues property and instead add them in order to the end of the value label, encoded using extended missing values if supported.
2. In cases where the data are stored in encoded form (e.g., as integers) and a corresponding enumLabels property is present, and assuming that the keys in the enumLabels object are limited to integers and extended missing values (if supported), use the enumLabels object to generate a value label and attach it to the field. As with (1), provide an option to skip automatically dropping values specified in the missingValues property and instead add them in order to the end of the value label, encoded using extended missing values if supported.
3. Although none of Stata, SAS or SPSS currently permits designating a specific variable as ordered, Stata permits attaching arbitrary metadata to individual variables. Thus, in cases where the enumOrdered property is present, this information can be stored in Stata to inform the analyst and to prevent loss of information when generating Frictionless data packages from within Stata.
# Software that supports categoricals or factors (Pandas, R, Julia)
1. In cases where a field has an enum constraint but no enumLabels property, automatically define a categorical or factor using the enum values in order, and convert the variable to categorical or factor type using this definition. Provide an option to skip automatically dropping values specified in the missingValues property and instead add them in order to the end of the enum values when defining the categorical or factor.
2. In cases where the data are stored in encoded form (e.g., as integers) and a corresponding enumLabels property is present, translate the data using the enumLabels object, define a categorical or factor using the values of the enumLabels object in lexical order of the keys, and convert the variable to categorical or factor type using this definition. Provide an option to skip automatically dropping values specified in the missingValues property and instead add them to the end of the enumLabels values when defining the categorical or factor.
3. In cases where a field has an enumOrdered property, use that when defining the categorical or factor. A sketch of this approach in Pandas follows.
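For example, a hedged Pandas sketch of steps (2) and (3), using the descriptor from the encoded example above (this is one possible reading, not a prescribed implementation):

```python
import pandas as pd

# Build an ordered pandas Categorical from a field descriptor that uses the
# proposed enum / enumOrdered / enumLabels extensions.
field = {
    "name": "physical_health",
    "type": "integer",
    "constraints": {"enum": [1, 2, 3, 4, 5]},
    "enumOrdered": True,
    "enumLabels": {"1": "Poor", "2": "Fair", "3": "Good", "4": "Very good", "5": "Excellent"},
}

codes = pd.Series([3, 1, 5, 2])                          # data as stored (encoded form)
labels = field["enumLabels"]
translated = codes.map(lambda code: labels[str(code)])   # translate codes to labels

categories = [labels[key] for key in sorted(labels)]     # lexical order of the keys
health = pd.Categorical(translated, categories=categories,
                        ordered=field.get("enumOrdered", False))
print(health)
```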
# All software
Although the extensions proposed here are intended primarily to support the
use of value labels and categoricals in software that supports them, they also
provide additional functionality when reading data into any software that can
handle tabular data. Specifically, the enumLabels
property may be used to
support the use of enums even in cases where value labels or categoricals are
not being used. For example, it is standard practice in software for analyzing
genetic data to code sex as 0, 1 and 2 (corresponding to “Unknown”, “Male” and
“Female”) and affection status as 0, 1 and 2 (corresponding to “Unknown”,
“Unaffected” and “Affected”). In such cases, the enumLabels
property may be
used to confirm that the data follow the standard convention or to indicate
that they deviate from it; it may also be used to translate those codes into
human-readable values, if desired.
# Notes
While this pattern is designed as an extension to Table Schema fields, it could also be used to document enum values of properties in profiles, such as contributor roles.
This pattern originally included a proposal to add an optional field-specific
missingValues
property similar to that described in the pattern
“missing values per field”
appearing in this document above. The objective was to provide a mechanism to
distinguish between so-called system missing values (i.e., values that
indicate only that the corresponding data are missing) and other values that
convey meaning but are typically excluded when fitting statistical models. The
latter may be represented by extended missing values (.a
, .b
, .c
,
etc.) in Stata and SAS, or in SPSS by negative integers that are then
designated as missing by using the MISSING VALUES
command. For example,
values such as “NA”, “Not applicable”, “.”, etc. could be specified in the
resource level missingValues
property, while values such as “Don’t know” and
“Refused”—often used when generating tabular summaries and occasionally used
when fitting certain statistical models—could be specified in the
corresponding field level missingValues
property. The former would still be
converted to null
before type-specific string conversion (just as they are
now), while the latter could be used by capable software when creating value
labels or categoricals.
While this proposal was consistent with the principles outlined at the
beginning (in particular, existing software would still yield a usable result
when reading the data), we realized that it would conflict with what appears
to be an emerging consensus regarding field-specific missingValues
; i.e.,
that they should replace the less specific resource level missingValues
for the corresponding field rather than be combined with them (see the discussion
here as well as the
“missing values per field”
pattern above). While there is good reason for replacing rather than combining
here (e.g., it is more explicit), it would unfortunately conflict with the
idea of using the field-specific missingValues
in conjunction with the
resource level missingValues
as just described; namely, if the
field-specific property replaced the resource level property then the system
missing values would no longer be converted to null
, as desired.
For this reason, we have dropped the proposal to add a field-specific
missingValues
property from this pattern, and assert that implementation of
this pattern by software should assume that if a field-specific missingValues
property is added to the
table schema
it should, if present, replace the resource level missingValues
property for
the corresponding field. We do not believe that this change represents a
substantial limitation when creating value labels or categoricals, since
system missing values can typically be easily distinguished from other missing
values when exported in CSV format (e.g., “.” in Stata or SAS, “NA” in R, or
“” in Pandas).
# Table Schema: Relationship between Fields
# Overview
The structure of tabular datasets is simple: a set of Fields grouped in a table.
However, the data present is often complex and reflects an interdependence between Fields (see explanations in the Internet-Draft NTV tabular format (NTV-TAB)).
Let’s take the example of the following dataset:
country | region | code | population |
---|---|---|---|
France | European Union | FR | 449 |
Spain | European Union | ES | 48 |
Estonia | European Union | ES | 449 |
Nigeria | Africa | NI | 1460 |
The data schema for this dataset indicates in the Field Descriptor “description”:
- for the “code” Field : “country code alpha-2”
- for the “population” Field: “region population in 2022 (millions)”
If we now look at the data we see that this dataset is not consistent because it contains two structural errors:
- The value of the “code” Field must be unique for each country; we cannot therefore have “ES” for both “Spain” and “Estonia”,
- The value of the “population” Field for “European Union” cannot have two different values (449 and 48)
These structural errors make the data unusable and yet they are not detected in the validation of the dataset (in the current version of Table Schema, there are no Descriptors to express this dependency between two fields).
The purpose of this specification is therefore on the one hand to express these structural constraints in the data schema and on the other hand to define the controls associated with the validation of a dataset.
# Context
This subject was studied and treated for databases and led to the definition of a methodology for specifying relationships and to the implementation of consistent relational databases.
The methodology is mainly based on the Entity–relationship model:
An entity–relationship model (or ER model) describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types (which classify the things of interest) and specifies relationships that can exist between entities (instances of those entity types).
The Entity–relationship model is broken down according to the conceptual-logical-physical hierarchy.
The Relationships are expressed literally by a name and in a structured way by a cardinality.
The Entity–relationship model for the example presented in the Overview is detailed in this NoteBook.
# Principles
Two aspects need to be addressed:
- Relationship expression: the methodology applied to databases can also be applied to tabular data, whose structure is similar to that of relational database tables but whose representation of relationships is different (see patterns used in tabular representations). This variation is explained in the linked notebook and presented in the example. Using a data model is a simple way to express relationships, but it is not required; we can express the relationships directly at the data schema level.
- Validity of a dataset: checking the validity of a relationship for a defined dataset is one of the functions of tabular structure analysis. It only requires counting functions available in any language (see the example of implementation).
# Proposed extensions
A relationship is defined by the following information:
- the two Fields involved (the order of the Fields is important with the “derived” link),
- the textual representation of the relationship,
- the nature of the relationship
Three proposals for extending Table Schema are being considered:
- New Field Descriptor
- New Constraint Property
- New Table Descriptor
After discussions only the third is retained (a relationship between fields associated to a Field) and presented below:
New Table Descriptor:
A relationships Table Descriptor is added. The properties associated with this Descriptor could be:
- fields: array with the names of the two Fields involved
- description: description string (optional)
- link: nature of the relationship
Pros:
- No mixing with Fields descriptors
Cons:
- Need to add a new Table Descriptor
- The order of the Fields in the array is important with the “derived” link
Example:
{ "fields": [ ], "relationships": [ { "fields" : [ "country", "code"], "description" : "is the country code alpha-2 of", "link" : "coupled" } { "fields" : [ "region", "population"], "description" : "is the population of", "link" : "derived"} ] }
# Specification
Assuming solution 3 (Table Descriptor), the specification could be as follows:
The relationships Descriptor MAY be used to define the dependency between fields.
The relationships Descriptor, if present, MUST be an array where each entry in the array is an object and MUST contain two required properties and one optional:
- fields: array with the property name of the two fields linked (required)
- link: string with the nature of the relationship between them (required)
- description: string with the description of the relationship between the two Fields (optional)
The link property value MUST be one of the following three:
derived:
- The values of the child (second array element) field are dependent on the values of the parent (first array element) field (i.e. a value in the parent field is associated with a single value in the child field).
- e.g. The “name” field [ “john”, “paul”, “leah”, “paul” ] and the “Nickname” field [ “jock”, “paulo”, “lili”, “paulo” ] are derived,
- i.e. if a new entry “leah” is added, the corresponding “nickname” value must be “lili”.
coupled:
- The values of one field are associated to the values of the other field.
- e.g. The “Country” field [ “france”, “spain”, “estonia”, “spain” ] and the “code alpha-2” field [ “FR”, “ES”, “EE”, “ES” ] are coupled,
- i.e. if a new entry “estonia” is added, the corresponding “code alpha-2” value must be “EE”, just as if a new entry “EE” is added, the corresponding “Country” value must be “estonia”.
crossed:
- This relationship means that all the different values of one field are associated with all the different values of the other field.
- e.g. the “Year” Field [ 2020, 2020, 2021, 2021 ] and the “Population” Field [ “estonia”, “spain”, “estonia”, “spain” ] are crossed,
- i.e. the year 2020 is associated with the population of “spain” and “estonia”, just as the population of “estonia” is associated with years 2020 and 2021.
# Implementations
The implementation of a new Descriptor is not discussed here (no particular point to address).
The control implementation is based on the following principles:
- calculation of the number of different values for the two Fields,
- calculation of the number of different values for the virtual Field composed of tuples of each of the values of the two Fields
- comparison of these three values to deduce the type of relationship
- comparison of the calculated relationship type with that defined in the data schema
The implementation example presents the calculation functions.
An analysis tool is also available and accessible from pandas data.
An example of an implementation as a custom_check is available here.
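A hedged sketch of the counting approach described above, deducing the relationship type between two fields from the numbers of distinct values (the function and its return labels are illustrative):

```python
def infer_link(values_a, values_b):
    """Deduce the relationship between two equally long columns by counting distinct values."""
    distinct_a = len(set(values_a))
    distinct_b = len(set(values_b))
    distinct_pairs = len(set(zip(values_a, values_b)))
    if distinct_pairs == distinct_a * distinct_b:
        return "crossed"
    if distinct_pairs == distinct_a == distinct_b:
        return "coupled"
    if distinct_pairs == distinct_a:
        return "derived"   # each parent value maps to a single child value
    return None            # no structural relationship holds for this data

region = ["European Union", "European Union", "European Union", "Africa"]
population = [449, 48, 449, 1460]
# The schema declares a "derived" link, but the data does not satisfy it,
# so the dataset from the Overview is invalid.
print(infer_link(region, population))  # None
```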
# Notes
If the relationships are defined in a data model, the generation of the relationships in the data schema can be automatic.
The example presented in the Overview and the rule for converting a data model into a Table Schema are detailed in this NoteBook.
A complete example (60,000 rows, 50 fields) is used to validate the methodology and the tools: open-data IRVE.