Serialization

Various serialization formats exist for transmitting structured data over the network: JSON is a popular choice among public APIs, partly because it's human readable, while a more compact format, such as Protocol Buffers, may be more appropriate for a private API used within an organization.

Regardless of which serialization format your API uses, Uplink – with a little bit of help – can automatically decode responses into Python objects and encode Python objects into request bodies using the selected format. This neatly abstracts the HTTP layer from your API client, so callers can operate on objects that make sense to your model instead of dealing directly with the underlying protocol.

This document walks you through how to leverage Uplink’s serialization support, including integrations for third-party serialization libraries like marshmallow and tools for writing custom conversion strategies that fit your unique needs.

Using Marshmallow Schemas

marshmallow is a framework-agnostic, object serialization library for Python. Uplink comes with built-in support for Marshmallow; you can integrate your Marshmallow schemas with Uplink for easy JSON (de)serialization.

First, create a marshmallow.Schema, declaring any necessary conversions and validations. Here’s a simple example:

import collections

import marshmallow

# A simple model to deserialize each repository into; any class works here.
Repo = collections.namedtuple("Repo", ["owner", "name"])

class RepoSchema(marshmallow.Schema):
    full_name = marshmallow.fields.Str()

    @marshmallow.post_load
    def make_repo(self, data, **kwargs):
        # marshmallow 3 passes extra keyword arguments to post_load hooks.
        owner, repo_name = data["full_name"].split("/")
        return Repo(owner=owner, name=repo_name)

Then, specify the schema using the @returns decorator:

from uplink import Consumer, get, returns

class GitHub(Consumer):
   @returns(RepoSchema(many=True))
   @get("users/{username}/repos")
   def get_repos(self, username):
      """Get the user's public repositories."""

Python 3 users can use a return type hint instead:

class GitHub(Consumer):
   @get("users/{username}/repos")
   def get_repos(self, username) -> RepoSchema(many=True):
      """Get the user's public repositories."""

Your consumer should now return Python objects based on your Marshmallow schema:

github = GitHub(base_url="https://api.github.com")
print(github.get_repos("octocat"))
# Output: [Repo(owner="octocat", name="linguist"), ...]

For a more complete example of Uplink’s marshmallow support, check out this example on GitHub.

Serializing Method Arguments

Most method argument annotations, such as Field and Body, accept a type parameter that specifies the argument's expected type or schema for serialization purposes.

For example, following the marshmallow example from above, we can specify the RepoSchema as the type of a Body argument:

from uplink import Consumer, Body, json, post

class GitHub(Consumer):
   @json
   @post("user/repos")
   def create_repo(self, repo: Body(type=RepoSchema)):
      """Creates a new repository for the authenticated user."""

The repo argument then accepts instances of Repo, which are serialized using the RepoSchema through Uplink's marshmallow integration (see Using Marshmallow Schemas for the full setup).

repo = Repo(name="my_favorite_new_project")
github.create_repo(repo)

Custom JSON Conversion

Recognizing JSON's popularity amongst public APIs, Uplink provides some out-of-the-box utilities that make adding JSON serialization support for your objects simple.

Deserialization

@returns.json is handy when working with APIs that provide JSON responses. As its leading positional argument, the decorator accepts a class that represents the expected schema of the JSON body:

from uplink import Consumer, get, returns

class GitHub(Consumer):
    @returns.json(User)
    @get("users/{username}")
    def get_user(self, username): pass

Python 3 users can alternatively use a return type hint:

class GitHub(Consumer):
    @returns.json
    @get("users/{username}")
    def get_user(self, username) -> User: pass

Next, if your objects (e.g., User) are not defined using a library for which Uplink has built-in support (such as marshmallow), you will also need to register a converter that tells Uplink how to convert the HTTP response into your expected return type.
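
For these examples, assume User is a plain Python class along these lines (a hypothetical sketch; any class with a matching constructor works):

class User(object):
    """A hypothetical model for GitHub users."""

    def __init__(self, id, username):
        self.id = id
        self.username = username

    def __repr__(self):
        return "User(id=%r, username=%r)" % (self.id, self.username)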

To this end, we can use @loads.from_json to define a simple JSON reader for User:

from uplink import loads

@loads.from_json(User)
def user_json_reader(user_cls, json):
    return user_cls(json["id"], json["username"])

The decorated function, user_json_reader(), can then be passed into the converter constructor parameter when instantiating a uplink.Consumer subclass:

github = GitHub(base_url=..., converter=user_json_reader)

Alternatively, you can add the @uplink.install decorator to register the converter function as a default converter, meaning the converter will be included automatically with any consumer instance and doesn’t need to be explicitly provided through the converter parameter:

from uplink import loads, install

@install
@loads.from_json(User)
def user_json_reader(user_cls, json):
    return user_cls(json["id"], json["username"])

Finally, calling the GitHub.get_user() method should now return an instance of our User class:

github.get_user("octocat")
# Output: User(id=583231, username="The Octocat")

Serialization

@json is a decorator for Consumer methods that send JSON requests. Using this decorator requires annotating your arguments with either Field or Body. Both annotations support an optional type argument for the purpose of serialization:

from uplink import Consumer, Body, json, post

class GitHub(Consumer):
   @json
   @post("user/repos")
   def create_repo(self, repo: Body(type=Repo)):
      """Creates a new repository for the authenticated user."""

Similar to the deserialization case, we must register a converter that tells Uplink how to turn a Repo object into JSON, since the class is not defined using a library for which Uplink has built-in support (such as marshmallow).
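
For this example, assume Repo is a plain Python class along these lines (a hypothetical sketch, distinct from the marshmallow-backed Repo above, consistent with how it's used below):

class Repo(object):
    """A hypothetical model for GitHub repositories."""

    def __init__(self, name, private=False):
        self.name = name
        self._private = private

    def is_private(self):
        return self._private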

To this end, we can use @dumps.to_json to define a simple JSON writer for Repo:

from uplink import dumps

@dumps.to_json(Repo)
def repo_json_writer(repo_cls, repo):
    return {"name": repo.name, "private": repo.is_private()}

The decorated function, repo_json_writer(), can then be passed into the converter constructor parameter when instantiating a uplink.Consumer subclass:

github = GitHub(base_url=..., converter=repo_json_writer)

Alternatively, as with the JSON reader above, you can add the @uplink.install decorator to register the converter function as a default converter, so it doesn't need to be explicitly provided through the converter parameter:

from uplink import dumps, install

@install
@dumps.to_json(Repo)
def repo_json_writer(repo_cls, repo):
    return {"name": repo.name, "private": repo.is_private()}

Now, we should be able to invoke the GitHub.create_repo() method with an instance of Repo:

repo = Repo(name="my_new_project", private=True)
github.create_repo(repo)

Converting Collections

Data-driven web applications, such as social networks and forums, build a lot of functionality around large queries on related data. Their APIs normally encode the results of these queries as collections of a common type. Examples include a curated feed of posts from subscribed accounts, the top restaurants in your area, upcoming tasks on a checklist, etc.

You can use the other strategies in this section to add serialization support for a specific type, such as a post or a restaurant. Once added, this support automatically extends to collections of that type, such as sequences and mappings.

For example, consider a hypothetical Task Management API that supports adding tasks to one or more user-created checklists. Here’s the JSON array that the API returns when we query pending tasks on a checklist titled “home”:

[
    {
       "id": 4139,
       "name": "Groceries",
       "due_date": "Monday, September 3, 2018 10:00:00 AM PST"
    },
    {
       "id": 4140,
       "name": "Laundry",
       "due_date": "Monday, September 3, 2018 2:00:00 PM PST"
    }
]

In this example, the common type could be modeled in Python as a namedtuple, which we’ll name Task:

import collections

Task = collections.namedtuple("Task", ["id", "name", "due_date"])

Next, to add JSON deserialization support for this type, we can create a custom converter using the @loads.from_json decorator, a strategy covered above in Custom JSON Conversion.
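
For example, a minimal reader that keeps the due dates as plain strings and registers itself as a default converter might look like this sketch:

from uplink import install, loads

@install
@loads.from_json(Task)
def task_json_reader(task_cls, json):
    # Keep the due date as the raw string provided by the API.
    return task_cls(json["id"], json["name"], json["due_date"])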

Notably, Uplink lets us leverage the added support to also handle collections of type Task. The uplink.types module exposes two collection types, List and Dict, to be used as function return type annotations. In our example, the query for pending tasks returns a list:

from uplink import Consumer, returns, get, types

class TaskApi(Consumer):
   @returns.json
   @get("tasks/{checklist}?due=today")
   def get_pending_tasks(self, checklist) -> types.List[Task]:
      """Get the checklist's pending tasks that are due today."""

If you are a Python 3.5+ user who is already leveraging the typing module to support type hints as specified by PEP 484 and PEP 526, you can safely use typing.List and typing.Dict here instead of the annotations from uplink.types:

import typing
from uplink import Consumer, returns, get

class TaskApi(Consumer):
   @returns.json
   @get("tasks/{checklist}?due=today")
   def get_pending_tasks(self, checklist) -> typing.List[Task]:
      """Get the checklist's pending tasks that are due today."""

Now, the consumer can handle these queries with ease:

>>> task_api.get_pending_tasks("home")
[Task(id=4139, name='Groceries', due_date='Monday, September 3, 2018 10:00:00 AM PST'),
 Task(id=4140, name='Laundry', due_date='Monday, September 3, 2018 2:00:00 PM PST')]

Note that this feature works with any serialization format, not just JSON.

Writing A Custom Converter

Extending Uplink’s support for other serialization formats or libraries (e.g., XML, Thrift, Avro) is pretty straightforward.

When adding support for a new serialization library, create a subclass of converters.Factory, which defines abstract methods for different serialization scenarios (deserializing the response body, serializing the request body, etc.), and override each relevant method to return a callable that handles the method’s corresponding scenario.

For example, a factory that adds support for Python’s pickle protocol could look like:

import pickle

from uplink import converters

class PickleFactory(converters.Factory):
   """Adapter for Python's Pickle protocol."""

   def create_response_body_converter(self, cls, request_definition):
      # Return callable to deserialize response body into Python object.
      return lambda response: pickle.loads(response.content)

   def create_request_body_converter(self, cls, request_definition):
      # Return callable to serialize Python object into bytes.
      return pickle.dumps
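
Uplink consults these factory methods whenever a consumer method declares a Python type for its response or request body. As a rough sketch of how the factory gets exercised (the Report class and the endpoints are hypothetical, invented for illustration), the MyApiClient consumer used below might be defined like this:

from uplink import Body, Consumer, get, post, returns

class Report(object):
   """A hypothetical domain object exchanged as a pickle."""

   def __init__(self, title):
      self.title = title

class MyApiClient(Consumer):
   @returns(Report)
   @get("reports/{report_id}")
   def get_report(self, report_id):
      """Fetch a report; PickleFactory unpickles the response body."""

   @post("reports")
   def create_report(self, report: Body(type=Report)):
      """Upload a report; PickleFactory pickles the request body."""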

Then, when instantiating a new consumer, you can supply this implementation through the converter constructor argument of any Consumer subclass:

client = MyApiClient(BASE_URL, converter=PickleFactory())

If the added support should apply broadly, you can alternatively decorate your converters.Factory subclass with the @uplink.install decorator, which ensures that Uplink automatically adds the factory to new instances of any Consumer subclass. This way you don’t have to explicitly supply the factory each time you instantiate a consumer.

from uplink import converters, install

@install
class PickleFactory(converters.Factory):
   ...

For a concrete example of extending support for a new serialization format or library with this approach, check out this Protobuf extension for Uplink.