Vehicle License Number Plate Detection using YOLOv7 model #633
Comments
Thanks for creating the issue.
I am a GSSoC '24 contributor and I'd love to work on this issue, since I have prior experience working with YOLO models for face detection. This would also be practically helpful to the government for traffic surveillance, e.g. in cases of non-adherence to traffic rules on roads and highways.
The issue has been assigned.
@invigorzz313 Sure ma'am! Thank you so much!
Level labels are added to PRs after they are reviewed.
I want to contribute to this. Please assign it to me.
Problem: Traditional CNN detection models are much slower than YOLOv7 due to their complex multi-stage pipelines.
Solution: YOLOv7 is reported to be 509% faster and 2% more accurate than the Mask R-CNN model.
It also runs on considerably cheaper hardware than comparable neural networks and can be trained quickly on small datasets without any pre-trained weights.
Approach: We will perform transfer learning, fine-tuning our deep learning model on the YOLOv7 architecture. After adding OCR-based text detection and extraction, the model will be able to predict vehicle plate numbers from image and video input.
Further, we can extend this to real-time video input: YOLO models, because of their high speed, are well suited to real-time object detection in practical scenarios such as government traffic surveillance for non-adherence to traffic rules on roads and highways, whereas slower CNN models would lag in such a case.