# Python SDK

The TAWNY Python SDK provides convenient access to the TAWNY REST API from Python.

## Installation

The TAWNY Python SDK is available from PyPI. You can install it with

```shell
pip install tawnyapi
```

## CLI usage

With the TAWNY Python SDK installed, you can use the TAWNY API via the command line:

```shell
python -m tawnyapi.vision.cli analyze
    --apikey <YOUR_API_KEY>
    --image <PATH_TO_THE_IMAGE>
    [--maxresults <MAX_RESULTS>]
    [--resize <IMAGE_SIZE>]
    [--feature <FEATURE_NAME>]
    [--uselocalfacedetection]
```

You can use the following parameters:

  • --apikey: Your API key to access the TAWNY API.
  • --image: The path to the image you want to analyze. You can pass the --image parameter more than once to analyze several images in a single request.
  • --maxresults: The maximum number of faces which should be analyzed per image. Faces are ordered by the size of their bounding box, from large to small. Default is 1.
  • --resize: Allows you to resize the image before sending it to the API (smaller images are processed faster). The parameter expects a single integer value which defines the maximum size of the longer side of the image. Default is 720.
  • --feature: Lets you define which types of analyses you want to run on the images. For multiple analyses, you can pass the --feature parameter more than once. Available features are FACE_DETECTION, FACE_EMOTION, FACE_LANDMARKS, and FACE_DESCRIPTOR. The default set of features is FACE_DETECTION and FACE_EMOTION.
  • --uselocalfacedetection: By default, face detection is run on the server. By setting this parameter, you can run the face detection algorithm locally on your machine, which means that only the cropped faces are sent to the server for the emotion analysis. Most of the time, this is faster than sending the whole image to the server.
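To make the --resize semantics concrete, here is a small illustrative sketch (not the SDK's internal code, and the function name is hypothetical) of how capping the longer side of an image while preserving the aspect ratio typically works:

```python
def capped_size(width, height, max_side=720):
    """Return (new_width, new_height) such that the longer side is at most
    max_side, preserving the aspect ratio. Illustrative only; this is not
    the SDK's internal implementation."""
    longer = max(width, height)
    if longer <= max_side:
        # Image is already small enough; no resizing needed.
        return width, height
    scale = max_side / longer
    return round(width * scale), round(height * scale)

# A 1920x1080 image with the default maximum of 720:
capped_size(1920, 1080)  # -> (720, 405)
```

Smaller uploads mean less transfer time, which is why reducing `--resize` can speed up requests at the cost of analysis detail.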

## Programmatic usage

If you want to use the API client in your code, you can follow this minimal example:

```python
from tawnyapi.vision.client import TawnyVisionApiClient

client = TawnyVisionApiClient(api_key=<YOUR_KEY>)

# Analyze images from files:
result = client.analyze_images_from_paths(
    image_paths=[<IMAGE_PATH_1>, <IMAGE_PATH_2>]
)

# Analyze images already in memory:
result = client.analyze_images(
    images=[<IMAGE_1>, <IMAGE_2>]
)
```
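For the in-memory variant, one way to prepare the images is to read the files into byte strings first. This is a sketch under the assumption (not confirmed by this document) that analyze_images accepts raw image bytes; the helper name and the file name in the commented call are hypothetical, so check the tawnyapi documentation for the exact expected type:

```python
def load_image_bytes(paths):
    """Read each file in `paths` and return a list of bytes objects.
    Hypothetical helper for illustration; not part of the SDK."""
    images = []
    for path in paths:
        with open(path, "rb") as f:
            images.append(f.read())
    return images

# Assumed usage, with a hypothetical file name:
# result = client.analyze_images(images=load_image_bytes(["face.jpg"]))
```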