The objective of this deep learning microservice is to detect whether an image is safe or not-safe-for-work (NSFW). The service recognizes two main categories, safe and nsfw, each of which is further divided into sub-categories. For the category nsfw, the sub-categories suggestive, nudity and porn are available, corresponding to increasing degrees of offensive sexual content. For the category safe, the system distinguishes between general, for images such as landscapes, objects or animals, and person, when a person (or a body part) is present in the image. Finally, safe and nsfw cartoons are also detected by the service, meaning there is a cartoon sub-category in both categories.
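The two-level taxonomy described above can be written down as a small data structure. This is only a sketch; the exact label strings are assumptions, since the source does not specify them:

```python
# Hypothetical label taxonomy for the service (label names assumed, not official).
TAXONOMY = {
    "safe": ["general", "person", "cartoon"],
    "nsfw": ["suggestive", "nudity", "porn", "cartoon"],
}

def subcategories(category: str) -> list[str]:
    """Return the sub-categories of a main category ('safe' or 'nsfw')."""
    return TAXONOMY[category]
```

Note that cartoon appears under both main categories, so a sub-category label alone does not determine whether an image is safe.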
The output of the service is the predicted category, the predicted sub-category, and the probabilities for all categories and sub-categories. The specification file is defined according to OpenAPI v3 (OAS3).
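The source does not show the response schema; a plausible JSON payload and a minimal client-side consistency check might look like the following (all field names and values are assumptions, not the service's documented format):

```python
import json

# Hypothetical example response; field names and probabilities are assumed.
raw = """
{
  "category": "safe",
  "subcategory": "person",
  "categories": {"safe": 0.93, "nsfw": 0.07},
  "subcategories": {
    "safe": {"general": 0.05, "person": 0.82, "cartoon": 0.06},
    "nsfw": {"suggestive": 0.04, "nudity": 0.02, "porn": 0.005, "cartoon": 0.005}
  }
}
"""

result = json.loads(raw)

# The predicted category should be the most probable main category,
# and the predicted sub-category the most probable one within it.
top_category = max(result["categories"], key=result["categories"].get)
within = result["subcategories"][top_category]
top_subcategory = max(within, key=within.get)
```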
This microservice employs a large deep neural network obtained with a transfer learning strategy from the pre-trained network MobileNetV2. The model was fine-tuned with 4'000 images per sub-category. More information about the data set and training procedure can be obtained by contacting us at info@icosys.ch
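A transfer-learning setup of this kind can be sketched in Keras. This is an illustrative reconstruction, not the actual training code: the input size, the classification head, and the number of output sub-categories (seven here, counting cartoon once per main category) are all assumptions.

```python
import tensorflow as tf

# Assumed: general, person, safe-cartoon, suggestive, nudity, porn, nsfw-cartoon.
NUM_SUBCATEGORIES = 7

# Pre-trained backbone. weights=None keeps this sketch offline; a real
# transfer-learning run would start from weights="imagenet" instead.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None
)
base.trainable = False  # freeze the backbone while training the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_SUBCATEGORIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

After the new head converges, a common second phase unfreezes the top layers of the backbone and continues training at a lower learning rate.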