In my previous post What to Desire from a Good Image Annotator?, I wrote about the high-level capabilities of an Image Annotation Tool. In this one, I will go over the requirements for the actual image annotations, or, as you may also know it, tagging. I will use two images as examples. The first one is a scanned receipt. The receipt example generalizes to the broader category of scanned documents, whether financial, legal, or others. The second example is a cityscape, which generalizes to any other image.

Annotating Store Receipt

Let’s start with the receipt. A receipt is a scanned document that contains financial information. Below is just one way that you may want to annotate a receipt.

Annotated receipt

In this example, I have decided to annotate the receipt using the logical grouping of information printed on it. Each region is a rectangle that encloses a part of the image that logically belongs together. Here is the list of regions and their possible annotations (a sketch of how such regions could be represented as data follows the list):

  • Region ID: 1
    Annotation: Store Logo
    Description: This can be the store logo or just the name printed on the receipt
  • Region ID: 2
    Annotation: Store Details
    Description: This can include information like address, phone number, store number, etc.
  • Region ID: 3
    Annotation: Receipt Metadata
    Description: This can be the date and time, the receipt number, as well as other receipt-specific metadata
  • Region ID: 4
    Annotation: Cashier Details
    Description: This is information about the cashier
  • Region ID: 5
    Annotation: Items
    Description: These are the purchased items, their quantities, and the individual item prices
  • Region ID: 6
    Annotation: Receipt Summary
    Description: This is the summary of the information for the purchase like subtotal amount, tax and the total amount
  • Region ID: 7
    Annotation: Customer Information
    Description: This is information about the customer and any loyalty programs he or she participates in
  • Region ID: 8
    Annotation: Merchant Details
    Description: This is additional information about the merchant
  • Region ID: 9
    Annotation: Transaction Type
    Description: This is information about the transaction
  • Region ID: 10
    Annotation: Transaction Details
    Description: This contains information about the transaction with the payment card processor. It can include transaction ID, the card type and number, timestamp, authorization code, etc.
  • Region ID: 11
    Annotation: Transaction Amounts
    Description: This summarizes the amounts for the transaction with the payment card processor
  • Region ID: 12
    Annotation: Transaction Status
    Description: This is the status of the transaction – i.e., Approved or Declined
  • Region ID: 13
    Annotation: Transaction Info
    Description: Those are technical details about the transaction
  • Region ID: 14
    Annotation: Copy Owner
    Description: This is information about the ownership of the receipt. Usually, this is Merchant or Customer
  • Region ID: 15, 16, and 17
    Annotation: Additional Details
    Description: These can be various things like return policies, disclaimers, advertisements, surveys, notes, and so on. In this example, we have 15 as Return Policy, 16 as Survey, and 17 as Additional Notes
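
To make this more concrete, here is a minimal sketch of how such annotated regions could be represented as data. The field names (region_id, annotation, description, bbox) and the coordinates are assumptions for illustration, not a standard format.

```python
# Hypothetical representation of the annotated receipt regions.
# Coordinates are made-up (x, y, width, height) values in pixels.
receipt_regions = [
    {
        "region_id": 1,
        "annotation": "Store Logo",
        "description": "The store logo or name printed on the receipt",
        "bbox": (40, 20, 320, 90),
    },
    {
        "region_id": 6,
        "annotation": "Receipt Summary",
        "description": "Subtotal, tax, and total amounts",
        "bbox": (40, 520, 320, 110),
    },
    # ... the remaining regions follow the same shape
]
```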

When you think about it, the above areas are the ones your eyes will immediately go to when looking for information. For example, if you want to know what store the receipt was from, you will look directly at the top where the logo should be (Region #1); if you want to know what the total amount is, your eyes will steer towards the receipt summary (Region #6), and so on. The majority of us will follow a similar approach to separating the data because it is something we do in our minds every day.

A few things to note about the annotations above. First, not every receipt will have all the information listed; some receipts will have more and some less. Second, annotations evolve. After annotating a certain number of receipts, you start building a pattern and make fewer changes the more you annotate. However, after some time, you may discover that the patterns you developed need to be updated. A straightforward example is coming up with a better name for an annotation; if this happens, you need to go back and change the names. Third, there is no standard way to name those annotations. You and I will undoubtedly have different names for the same thing.

Now, let’s write a few requirements from this receipt example.

  1. The first thing we did was to draw the rectangular regions that we want to annotate. This is our first and simplest requirement.
  2. The second thing we did was to annotate each rectangular region. When we create the annotation, we should be able to add additional information, like a description of the annotation.
  3. The third thing we want is to be able to update annotation information retrospectively.

Those are good as a beginning. But to provide more context and back up our requirements, it will be useful to think about how those annotations will be used, i.e., to define our use cases. I hinted at those above.

Use Case #1: Logo Recognition

Let’s say you are developing a classification application that recognizes which store a receipt is from. You can easily do this by looking at the store logo only and developing a machine learning algorithm that returns the name of the store by recognizing the logo. For this, the only region you will need is Region 1 with the logo. Thus, you can just cut this region out of the receipt and train your algorithm only on the logo. That way you minimize the noise from the rest of the receipt, and your algorithm can achieve better accuracy.
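
Below is a minimal sketch of this use case, assuming the annotated region is stored as a pixel bounding box. The file name, the coordinates, and the classify_logo() helper are hypothetical placeholders, not part of a real API.

```python
# Cut the annotated logo region (Region 1) out of the receipt image
# so a classifier can be trained on the logo alone.
from PIL import Image

def crop_region(image_path, bbox):
    """Crop a rectangular annotated region (left, top, right, bottom) from an image."""
    image = Image.open(image_path)
    return image.crop(bbox)

# Hypothetical receipt file and Region 1 coordinates.
logo = crop_region("receipt_0001.png", (40, 20, 360, 110))
# label = classify_logo(logo)  # hypothetical model trained only on logo crops
```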

Use Case #2: Receipt Amount Extraction

If your application needs to extract the summary amounts from the receipt, you can concentrate on Region 6. That region contains all the information you will need. A few things you can do with this region (sketched in the snippet after the list) are:

  • Binarize the area
  • Straighten the text
  • OCR the text
  • Analyze the extracted text (not an image-related task anymore :))
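
Here is a rough sketch of those steps applied to a cropped Region 6 image, assuming OpenCV and pytesseract are installed; the straightening (deskew) step is omitted for brevity, and the file name is made up.

```python
import cv2
import pytesseract

# Load the cropped Region 6 image (receipt summary) in grayscale.
summary = cv2.imread("receipt_0001_region6.png", cv2.IMREAD_GRAYSCALE)

# Binarize the area using Otsu's thresholding.
_, binary = cv2.threshold(summary, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OCR the text; the extracted lines can then be parsed for subtotal, tax, and total.
text = pytesseract.image_to_string(binary)
print(text)
```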

This use case is applicable to any other area you annotated on the receipt. It doesn’t matter whether you want to obtain the credit card number or the timestamp; the approach will be the same.

Nested Annotations

Now, let’s look at another way to annotate the same receipt.

Annotated Receipt

If your application needs to determine your shopping habits based on geography, you will need to extract detailed information about the store location. Thus, you will want to annotate the receipt as above to know which part is the street address, which is the city, etc. But those regions are all nested in Region 2 from our first annotation pass. It will be useful to have both types of annotations and use them for different use cases.

So, the next requirement for the tool will be:

  4. We should be able to create nested (or overlapping) annotation regions, so that a region like Store Details can contain child regions such as the street address and the city, and each can be used independently.
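
Here is a minimal sketch of how nested regions could be represented, assuming each region carries an optional parent_id pointing to its enclosing region. The field names and coordinates are illustrative only.

```python
# Region 2 (Store Details) with two hypothetical child regions nested inside it.
store_details = {"region_id": 2, "annotation": "Store Details", "parent_id": None,
                 "bbox": (40, 120, 320, 140)}
street_address = {"region_id": 18, "annotation": "Street Address", "parent_id": 2,
                  "bbox": (48, 128, 300, 30)}
city = {"region_id": 19, "annotation": "City", "parent_id": 2,
        "bbox": (48, 162, 300, 30)}

def children_of(regions, parent_id):
    """Return the regions nested inside a given parent region."""
    return [r for r in regions if r["parent_id"] == parent_id]

print(children_of([store_details, street_address, city], parent_id=2))
```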

That is also very relevant in the next example, where we have areas with buildings but also want to annotate a single building.

Annotating Cityscapes

Annotating landscapes, cityscapes or other images with real objects is very similar to the receipt annotation. However, real objects rarely have regular shapes in pictures. Here is an example from a picture I took in Tokyo some time ago.

Annotated Cityscape

In this example, I have annotated only a few of the objects: two buildings (1 and 2), a crane (3), a soccer field (4), and a tree (5). The requirements for annotating landscapes are not too different from the requirements for annotating documents. There is just one more thing we need to add to the tool to support real-object tagging: the ability to outline regions with free-form polygons (or other irregular shapes), not only rectangles, because real objects rarely fit neatly into rectangular boxes.
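
Here is a minimal sketch of what such a polygon annotation could look like, assuming a region can be stored as a list of (x, y) vertices instead of a rectangle; the coordinates are made up. The area calculation is included only to show that irregular regions remain easy to work with.

```python
# Hypothetical free-form annotation for the crane (object 3 in the picture).
crane = {
    "region_id": 3,
    "annotation": "Crane",
    # Vertices in pixel coordinates, listed in order around the outline.
    "polygon": [(812, 140), (840, 95), (1010, 88), (1015, 130), (860, 170)],
}

def polygon_area(points):
    """Area of the annotated polygon in square pixels (shoelace formula)."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

print(polygon_area(crane["polygon"]))
```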

There are many use cases that you can develop for real-object recognition, and for that, versatile annotation capabilities will be important in any tool.

Additional Requirements for Annotations

All requirements that I have listed above are specific to the objects or areas in the pictures. However, we need to have the ability to add meta information to the whole picture. Well, you may think we already have a way to do that! We can use the EXIF data. The EXIF data is helpful, and it is automatically populated by the camera or the editing tool. However, it has limited capabilities for free-form meta-information because its fields are standardized.

For example, if you want to capture who annotated the image last and at what time, you cannot use the EXIF fields for that. You can repurpose some EXIF fields, but you will lose the original information. What we need is a simple way to create key-value metadata for the image (sketched below). Of course, having the ability to see the EXIF information would be a helpful feature, although maybe not a high-priority one.
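
Here is a minimal sketch of such image-level metadata; the keys are examples of what one might store, not a standard.

```python
# Free-form key-value metadata attached to the whole image,
# kept alongside (not inside) the EXIF data.
image_metadata = {
    "file": "receipt_0001.png",
    "annotated_by": "jane.doe",              # who touched the image last
    "annotated_at": "2019-03-14T09:26:53Z",  # when it was last annotated
    "dataset": "receipts-v1",
    "notes": "Second pass; nested store-details regions added",
}
```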

With all that, I believe we have enough requirements to start working on the tool design. If you are curious to follow the development or participate in it, you can head over to the Image Annotator GitHub project. The next thing we need to do is some design work, which includes UI design, back-end design, and the data model.

Recently, I started looking for an image annotation tool that we can use to annotate a few thousand images. There are quite a few to choose from, ranging from paid ones like Labelbox and Dataturks to free and open-source ones like coco-annotator, imglab, and labelimg. The paid ones also have a free tier that is limited but can give you a sense of their capabilities.

Each one of those tools can be used to create an image dataset, but without going into details about each of them, here are the things we were looking for in order to consider an image annotation tool fit for our needs:

  • First, and foremost – usability. I like a nice UI, but it shouldn’t take too much time and too many clicks to get to the core functionality of the tool – i.e., annotating images. Navigating within the annotator, creating projects, and adding images should be smooth and easy. Users shouldn’t need to be tech-savvy to set up and start using the annotation tool.
  • Second is security. I wouldn’t put this one so high on the list if I didn’t need to deal with sensitive data. Unfortunately, even some of the commercially available annotation tools neglected basic security features like encryption. Annotating healthcare images like x-rays or financial documents on such platforms will be out of the question. Also, detailed privacy and data handling policies are crucial for handling sensitive images.
  • The third is scalability. The stand-alone annotation tools are limited to the resources of the machine they run on, while the hosted ones lack detailed information about the limits they impose on the various plans. In addition to the number of projects and users, which are most often quoted in the commercial plans, the number of images per dataset and the total amount of storage available would be good to know.
  • Fourth is versatility. And, with versatility, I don’t mean only the ways one annotates images (polygons vs. rectangles for example) but also export formats like COCO, PASCAL VOC, YOLO, etc.; ability to choose different storage backends like files, AWS S3, Azure Storage or Dropbox and so on. Like everything else in technology, there is no single standard, and a good image annotation tool should satisfy the needs of different audiences.
  • The fifth is collaboration. An image annotator should allow many (if not thousands of) people to collaborate on a project. Datasets like PASCAL VOC or COCO consist of hundreds of thousands of images and millions of annotations – work that is beyond the scale of a single person or a small team. Enabling lightweight access for external collaborators is crucial to the success of such a tool.

OK! Let’s be a little bit more concrete about what requirements I have for an image annotation tool.

Image Annotation Tools Usability

Before we go into the usability requirements, let’s look at the typical steps a user must go through to start annotating images:

  1. Select the tool or the service
  2. Install the tool or sign up to the service
  3. Create a project or data set
  4. Choose the data storage location (if applicable)
  5. Determine users and user access (if applicable)
  6. Upload image
  7. Annotate image

To select the tool or the service, users need to be clear about what they are getting. Providing a good description of the features the annotation tool offers is crucial for a successful selection. Some developers rely on trials, but I would like to save the time spent on registration and installation if I know upfront that the tool will not work for me.

If the tool is a stand-alone tool, its installation should not be a hassle. A lot of the open-source tools rely on the tech savviness of their users, which can be an obstacle. Besides, I am reluctant to install something on my machine if it may turn out to be not what I need. Trials and easy removal instructions are crucial.

One of the biggest usability issues I saw in the tools is convoluted flows for creating projects and datasets. Either bugs or unclear flows resulted in a lot of frustration when trying out some of the tools. For me it is simple – it should follow the age-old concept of files and folders.

Being clear about where the data is stored and how to access it or get it out of there is important. For some of the tools I tested, it took a while to understand where the data is; others used proprietary structures and (surprisingly) offered no export capabilities.

Adding users and determining access is always a hassle (and not only in image annotation tools). Still, there should be a clear workflow for doing that, as well as for opening the dataset to the public if needed.

Although I may have an existing dataset, I may want to add new images to it – either one by one or in bulk. This should be one of the most prominent flows in the annotation tool. At any point in the UI, I should be able to easily upload one or multiple files.

For me, the annotation UI should mimic the UI of traditional image-editing software like Adobe Photoshop: menus on the top, tools on the left, the working area in the middle, and properties on the right. It may be boring or not very modern, but it is familiar and intuitive.

Securing Annotated Images

We deal with scanned financial documents that can contain highly sensitive information like names, addresses, sometimes credit card details, account numbers, or even social security numbers. Some of our customers would like to have a tool that allows them to annotate medical images like x-rays – those images can also contain personal information in their metadata (if, for example, the DICOM format is used).

Unless the annotation tool is a standalone tool that people install on their local machine, using secure HTTPS is a no-brainer and the least you can do from a security point of view (surprisingly, some of the SaaS services fell short even here). However, security goes far beyond that. Things that should be added are:

  • Encrypting the storage where the annotated images are stored. Hosted or self-managed keys should be allowed.
  • Proper authentication mechanisms should be added. Multi-Factor-Authentication should be used for higher security.
  • Good role-based access control (RBAC) should be implemented. For example, some people should only be able to view the annotated images, while others should be able to annotate and edit them (a minimal sketch of this follows the list).
  • Change logs should be kept as part of the application. For example, it will be important to know who created a certain annotation and whether it was correct or not.
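
To illustrate the RBAC point, here is a minimal sketch; the role names and permissions are assumptions, not a prescribed scheme.

```python
# Hypothetical role-based access control for an annotation tool.
ROLE_PERMISSIONS = {
    "viewer":    {"view"},
    "annotator": {"view", "annotate"},
    "admin":     {"view", "annotate", "edit", "delete", "manage_users"},
}

def is_allowed(role, action):
    """Check whether a role may perform an action on an annotated image."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("annotator", "annotate")
assert not is_allowed("viewer", "edit")
```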

Scalability for Image Annotators

A good dataset can contain hundreds of thousands of images – the 2017 COCO dataset has 118K images in its training set alone. Depending on the quality of the images, the storage needed can vary from tens of gigabytes to hundreds of gigabytes to petabytes and more. Having the ability to grow the storage is essential to the success of an image annotation tool.

On the user side, a dataset of a hundred thousand images may require hundreds of people to annotate. Being able to support a large user base without a huge impact on cost is also important (hence, user-based licensing may not be the best option for such a SaaS offering, because a single user may annotate only 2-3 images from the whole dataset).

The back-end or APIs handling the annotation of the images should also be able to scale to the number of images and users without problems.

Versatile Export Options for Annotated Images

Rarely is the image annotation tool tightly coupled with the machine learning system that will use the images. Also, the same annotations can be used by various teams using different systems to create machine learning models. A clear explanation of the format used to store the annotations is a must-have, and the ability to export the annotations in common formats will be essential for the success and usefulness of the tool.

The word “export” here may mean different things. It doesn’t always need to mean downloading the images and annotations in the desired format; it can simply mean saving the annotations in that format.

I would start by defining a versatile format for storing the image annotations and then offer different “export” options, whether for download or just conversion in the original storage. A rough sketch of such an export path is shown below.
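
The sketch below shows one such “export” path: converting internal rectangular regions into a COCO-style structure. The field values are illustrative, and a real exporter would need to cover the full COCO schema (segmentation, licenses, and so on).

```python
import json

def to_coco(image_id, file_name, width, height, regions):
    """Convert internal regions (with 'bbox' as x, y, w, h) into a COCO-style dict."""
    categories = {}
    annotations = []
    for i, region in enumerate(regions, start=1):
        name = region["annotation"]
        cat_id = categories.setdefault(name, len(categories) + 1)
        x, y, w, h = region["bbox"]
        annotations.append({
            "id": i,
            "image_id": image_id,
            "category_id": cat_id,
            "bbox": [x, y, w, h],
            "area": w * h,
            "iscrowd": 0,
        })
    return {
        "images": [{"id": image_id, "file_name": file_name,
                    "width": width, "height": height}],
        "annotations": annotations,
        "categories": [{"id": cid, "name": name} for name, cid in categories.items()],
    }

# Hypothetical regions from an annotated receipt.
regions = [
    {"annotation": "Store Logo", "bbox": (40, 20, 320, 90)},
    {"annotation": "Receipt Summary", "bbox": (40, 520, 320, 110)},
]
print(json.dumps(to_coco(1, "receipt_0001.png", 800, 2000, regions), indent=2))
```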

Collaborating While Annotating Images

Having a single person create an image dataset with hundreds of thousands of images is unrealistic. Such a task requires the collaboration of many people, who can be spread around the world. Having the ability not only to give them access to annotate the images but also to comment and give suggestions on already existing annotations is a feature that should be high on the priority list.

Annotations, like software, are not free of bugs. Hence, the image annotation tool should allow for collaboration similar to what modern software development tools enable. This may not be a V1 feature, but it should certainly come soon after.

Now that I have a good idea of what we would like to have from an image annotation tool, it is time to think about how to implement one that incorporates the above-mentioned functionality. In the next post, I will look at what we would like to annotate and how to approach the data model for annotations.