# ImageNet

[ImageNet](http://image-net.org/) is formally a project aimed at (manually) labeling and categorizing images into almost 22,000 separate object categories for the purpose of computer vision research.

However, when we hear the term *“ImageNet”* in the context of deep learning and Convolutional Neural Networks, we are likely referring to the *ImageNet Large Scale Visual Recognition Challenge*, or ILSVRC for short.

The goal of this image classification challenge is to train a model that can correctly classify an input image into 1,000 separate object categories.

Models are trained on ~1.2 million training images, with another 50,000 images for validation and 100,000 images for testing.

These 1,000 image categories represent object classes that we encounter in our day-to-day lives, such as species of dogs and cats, various household objects, vehicle types, and much more. You can find the full list of object categories in the ILSVRC challenge [here](http://image-net.org/challenges/LSVRC/2014/browse-synsets).

When it comes to image classification, the ImageNet challenge is the *de facto* benchmark for computer vision classification algorithms — and the leaderboard for this challenge has been ***dominated*** by Convolutional Neural Networks and deep learning techniques since 2012.
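Results on the ILSVRC leaderboard are conventionally reported as top-1 and top-5 accuracy (or, equivalently, error): a prediction counts as a top-5 hit if the true class appears among the model's five highest-scoring classes. The metric itself is simple to compute; below is a minimal NumPy sketch (the function name and the synthetic scores are illustrative, not part of any official challenge toolkit):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of images whose true class is among the k top-scoring classes.

    scores: (n_images, n_classes) array of class scores or probabilities.
    labels: (n_images,) array of true class indices.
    """
    # Indices of the k highest-scoring classes for each image.
    top_k = np.argsort(scores, axis=1)[:, -k:]
    # A hit if the true label appears anywhere in those k indices.
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

# Synthetic demo: 4 "images" over 10 classes (ILSVRC itself uses 1,000).
rng = np.random.default_rng(0)
scores = rng.random((4, 10))
labels = np.array([2, 5, 7, 1])
print(top_k_accuracy(scores, labels, k=5))
```

Top-5 accuracy is more forgiving than top-1, which matters for ImageNet because many of the 1,000 classes are fine-grained (e.g., closely related dog breeds) and a single image can plausibly contain more than one labeled object.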


