Recently, I started looking for an image annotation tool that we can use to annotate a few thousand images. There are quite a few to choose from, ranging from paid ones like Labelbox and Dataturks to free and open-source ones like coco-annotator, imglab and labelimg. The paid ones also have a free tier that is limited but can give you a sense of their capabilities.
Each one of those tools can be used to create an image dataset, but without going into the details of each one, here are the things we were looking for in order to consider an image annotation tool fit for our needs:
- First, and foremost – usability. I like a nice UI, but it shouldn’t take too much time and too many clicks to get to the core functionality of the tool – i.e., annotating images. Navigating within the annotator, creating projects and adding images should be smooth and easy. Users shouldn’t need to be tech savvy to set up and start using the annotation tool.
- Second is security. I wouldn’t put this one so high on the list if I didn’t need to deal with sensitive data. Unfortunately, even some of the commercially available annotation tools neglect basic security features like encryption. Annotating healthcare images like x-rays or financial documents on such platforms is out of the question. Detailed privacy and data handling policies are also crucial for handling sensitive images.
- Third is scalability. Stand-alone annotation tools are limited to the resources of the machine they run on, while the hosted ones lack detailed information about the limits they impose on their various plans. In addition to the number of projects and users, which are most often quoted in the commercial plans, the number of images per dataset or the total amount of storage available would be good to know.
- Fourth is versatility. By versatility I don’t mean only the ways one annotates images (polygons vs. rectangles, for example) but also export formats like COCO, PASCAL VOC, YOLO, etc., and the ability to choose different storage backends like files, AWS S3, Azure Storage or Dropbox. Like everything else in technology, there is no single standard, and a good image annotation tool should satisfy the needs of different audiences.
- Fifth is collaboration. An image annotator should allow many (if not thousands of) people to collaborate on a project. Datasets like PASCAL VOC or COCO consist of hundreds of thousands of images and millions of annotations – work that is beyond the scale of a single person or a small team. Enabling lightweight access for external collaborators is crucial to the success of such a tool.
OK! Let’s be a little bit more concrete about what requirements I have for an image annotation tool.
Image Annotation Tools Usability
Before we go into the usability requirements, let’s look at the typical steps a user must go through to start annotating images:
- Select the tool or the service
- Install the tool or sign up to the service
- Create a project or data set
- Choose a data storage location (if applicable)
- Determine users and user access (if applicable)
- Upload images
- Annotate images
To select the tool or the service, users need to be clear about what they are getting. Providing a good description of the features the annotation tool offers is crucial for successful selection. Some developers rely on trials, but I would like to save the time spent registering and installing if I know upfront that the tool will not work for me.
If the tool is a stand-alone tool, its installation should not be a hassle. A lot of the open source tools rely on the tech savviness of their users, which can be an obstacle. Besides, I am reluctant to install something on my machine if it may turn out to be not what I need. Trials and easy removal instructions are crucial.
One of the biggest usability issues I saw in the tools is the convoluted flow for creating projects and datasets. Either bugs or unclear flows resulted in a lot of frustration when trying out some of the tools. For me it is simple – it should follow the age-old concept of files and folders.
Being clear about where the data is stored and how to access it or get it out of there is important. For some of the tools I tested, it took a while to understand where the data is; others used proprietary structures and (surprisingly) offered no export capabilities.
Adding users and determining access is always a hassle (and not only in image annotation tools). Still, there should be a clear workflow for doing that as well as opening the dataset to the public if needed.
Although I may have an existing dataset, I may want to add new images to it – either one by one or in bulk. This should be one of the most prominent flows in the annotation tool. At any point in the UI, I should be able to easily upload a single file or multiple files.
For me, the annotation UI should mimic the UI of traditional image editing software like Adobe Photoshop: menus on the top, tools on the left, working area in the middle and properties on the right. It may be boring or not modern, but it is familiar and intuitive.
Securing Annotated Images
We deal with scanned financial documents that can contain highly sensitive information like names, addresses, sometimes credit card details, account numbers or even social security numbers. Some of our customers would like a tool that allows them to annotate medical images like x-rays – those images can also contain personal information in their metadata (if, for example, the DICOM format is used).
Unless the annotation tool is a standalone tool that people install on their local machine, using HTTPS is a no-brainer and the least you can do from a security point of view (surprisingly, some of the SaaS services lacked even that). However, security goes far beyond that. Things that should be added are:
- Encrypting the storage where the annotated images are stored. Hosted or self-managed keys should be allowed.
- Proper authentication mechanisms should be added. Multi-factor authentication (MFA) should be used for higher security.
- Good Role-Based Access Control (RBAC) should be implemented. For example, some people should be able to just view the annotated images, while others should be able to annotate and edit them.
- Change logs should be kept as part of the application. For example, it will be important to know who created a certain annotation and whether it was correct or not.
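To make the RBAC point concrete, here is a minimal sketch of how role-to-permission mapping could look. The roles, permission names and function are hypothetical, for illustration only – not from any particular annotation product:

```python
# A minimal RBAC sketch for an annotation tool.
# Roles and permission names are illustrative assumptions.

ROLE_PERMISSIONS = {
    "viewer":    {"view"},
    "annotator": {"view", "annotate"},
    "editor":    {"view", "annotate", "edit"},
    "admin":     {"view", "annotate", "edit", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("viewer", "annotate"))  # False – a viewer can only view
print(is_allowed("editor", "edit"))      # True
```

A real implementation would also tie every state change to the acting user, which is what makes the change log requirement above enforceable.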
Scalability for Image Annotators
A good dataset can contain hundreds of thousands of images – the COCO dataset for 2017 has 118K images in its training dataset. Depending on the quality of the images, the storage needed to store those can vary from 10s of GB to 100s of GB to PB and more. Having an ability to grow the storage is essential to the success of an image annotation tool.
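A quick back-of-the-envelope calculation illustrates the range above. The per-image sizes are assumptions (small web JPEGs vs. high-resolution scans), not measured COCO figures:

```python
# Back-of-the-envelope storage estimate for an image dataset.
# Average per-image sizes below are assumptions for illustration.

def storage_gb(num_images: int, avg_image_mb: float) -> float:
    """Total storage in GB for num_images at avg_image_mb each."""
    return num_images * avg_image_mb / 1024

# COCO 2017 train split: 118K images (as mentioned above).
for avg_mb in (0.2, 1.0, 5.0):
    print(f"{avg_mb} MB/image -> {storage_gb(118_000, avg_mb):.0f} GB")
# 0.2 MB/image -> 23 GB
# 1.0 MB/image -> 115 GB
# 5.0 MB/image -> 576 GB
```

Even a modestly sized dataset quickly outgrows a single machine’s disk, which is why growable storage matters.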
On the user side, a dataset of a hundred thousand images may require hundreds of people to annotate it. Being able to support a large user base without a huge impact on cost is also important (hence, per-user licensing may not be the best option for such a SaaS offering, because a single user may annotate only 2-3 images from the whole dataset).
The back-end or APIs handling the annotation of the images should also be able to scale to the number of images and users without problems.
Versatile Export Options for Annotated Images
Rarely is the image annotation tool tightly coupled with the machine learning system that will use the images. Also, the same annotations can be used by various teams using different systems to create their machine learning models. A clear explanation of the format used to store the annotations is a must-have, but the ability to export the annotations in common formats will also be essential for the success and usefulness of the tool.
The word “export” here may mean different things. It doesn’t always need to mean downloading the images and annotations in the desired format; it can simply mean saving the annotations in that format.
I would start with defining a versatile format for storing the image annotations and then offer different “export” options, whether for download or just conversion in the original storage.
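As a sketch of that idea, here is how a single internal bounding-box representation could be converted to two common export formats. The internal corner-based format is a hypothetical choice for illustration; the COCO `[x, y, width, height]` bbox and the normalized YOLO `class x_center y_center width height` line are the real target conventions:

```python
# Sketch: one internal annotation format, multiple "export" conversions.
# The internal (x_min, y_min, x_max, y_max) format is an assumption.

def to_coco_bbox(box):
    """Internal corners -> COCO-style [x, y, width, height]."""
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]

def to_yolo_line(box, class_id, img_w, img_h):
    """Internal corners -> YOLO 'class x_c y_c w h' (normalized 0..1)."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

box = (100, 50, 300, 250)        # a 200x200 box
print(to_coco_bbox(box))         # [100, 50, 200, 200]
print(to_yolo_line(box, 0, 640, 480))
```

Keeping the conversions pure functions like this makes it cheap to add new export targets later without touching the stored annotations.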
Collaborating While Annotating Images
Having a single person create an image dataset with hundreds of thousands of images is unrealistic. Such a task requires the collaboration of many people who can be spread around the world. Having the ability to not only give them access to annotate the images but also to comment and give suggestions to already existing annotations is a feature that should be high on the priority list.
Annotations, like software, are not free of bugs. Hence, the image annotation tool should allow for collaboration similar to what modern software development tools enable. This may not be a V1 feature, but it should certainly come soon after.
Now that I have a good idea of what we would like from an image annotation tool, it is time to think about how to implement one that incorporates the above-mentioned functionality. In the next post, I will look at what we would like to annotate and how to approach the data model for annotations.