Since the dawn of civilization, the world’s greatest minds have spent their lives seeking the answer to the question:
How do I win at back alley dice?
Ok, so that’s obviously not true. This question does hint at one that actually has plagued humanity for thousands of years:
What is going to happen?
Think about it. Pharaohs, kings, magistrates, tribal chieftains, and superstitious everyday people have sought out counsel from wise men, soothsayers, oracles, prophets and prophetesses, priests and priestesses, palm/tarot readers, fortune tellers, and phone-line psychics to get the answer to this question. …
So far, we’ve covered three out of the four Pillars of Object-Oriented Programming: Inheritance, Abstraction, and Polymorphism. Today, we’ll be wrapping this series up with the final pillar: Encapsulation (and no, I did not plan the series out to set up that pun; it’s just a happy accident). But first, a brief recap!
If you remember, Inheritance is the ability of classes in a hierarchical relationship (parent-child classes) to pass on data, variables, and methods to their subclasses. If an Animal class has a method called speak(), then any child class of Animal (such as Fox) will inherit that method.
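The Animal/Fox example above can be sketched in a few lines of Python. The class names come from the recap; the body of speak() is my own assumption for illustration:

```python
class Animal:
    def speak(self):
        return "Some generic animal sound"


class Fox(Animal):
    # Fox defines nothing of its own; it inherits speak() from Animal.
    pass


fox = Fox()
print(fox.speak())  # works even though Fox never defined speak()
```

Even though Fox is an empty class, calling fox.speak() succeeds because Python looks the method up on the parent class.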
Last week, we discussed Abstraction, which lets a developer expose only the details necessary to use a class and its methods without revealing the underlying code. This protects your source code and saves users time, since they don’t have to reinvent the wheel whenever they want to use your methods, classes, packages, etc. It’s a pretty simple concept!
We also discussed abstract classes, which are classes that have at least one undefined method. In Python, you first need to import ABC (an acronym for Abstract Base Class) and …
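The excerpt cuts off here, but a minimal sketch of an abstract class using Python’s ABC looks like this. The Shape/Square names and the area() method are my own assumptions for illustration; only the ABC import and the “at least one undefined method” idea come from the text:

```python
from abc import ABC, abstractmethod


class Shape(ABC):
    @abstractmethod
    def area(self):
        """Undefined here; every concrete subclass must provide it."""


class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):  # supplies the missing definition
        return self.side * self.side


print(Square(3).area())  # 9
```

Trying to instantiate Shape directly raises a TypeError, which is how Python enforces that the undefined method gets filled in.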
Last time, we spoke at length about the principle of Inheritance in Object-Oriented Programming. Inheritance is a trait of OOP languages that allows a parent/superclass to pass down attributes, data, and methods to its child/subclasses. A major benefit is that developers can cut down on redundant code: they don’t need to explicitly program the same method over and over across many classes in a hierarchical (parent-child) relationship, since the child classes will have already inherited those methods implicitly.
Inheritance also allows for method overriding, where a child class shares a method with…
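The excerpt is cut short, but method overriding can be sketched briefly. I’m reusing the series’ Animal/Fox names; both method bodies are assumptions for illustration:

```python
class Animal:
    def speak(self):
        return "Some generic animal sound"


class Fox(Animal):
    def speak(self):
        # Same method name as the parent, so this version
        # replaces (overrides) the inherited one.
        return "Ring-ding-ding!"


print(Animal().speak())  # "Some generic animal sound"
print(Fox().speak())     # "Ring-ding-ding!"
```

The child class shares the method’s name and signature with its parent but supplies its own behavior.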
Update 3/24/2021: I wanted to take a moment to thank Florian Salihovic for helping me identify areas that needed clarification.
Today, we’ll be discussing Inheritance, one of the “Four Pillars of Object-Oriented Programming”. The programming language we’ll be using in our discussion is Python. I should preface that I have a bias for Python. Why? Because the syntax is simpler. That’s it.
I’m taking a much-needed break from talking about Computer Vision to address a misconception about RAM that still gets passed around today.
I remember when I was younger I would hear this axiom:
“If your computer is slow, just add more RAM!”- Almost every sales associate at MicroCenter or Best Buy
This led to a general belief among the masses that you could keep your 10-year-old Dell running and relevant by JUST increasing the RAM. This is not true. Adding more RAM does not make a faster computer. …
Over the last six weeks, we’ve learned a lot about how we teach a model to detect images. We started with the basics of how a computer reads and stores an image. Then we transitioned to how an algorithm defines objects in an image by use of Haar-like features. We discussed how the Integral Image reduces the time and resources necessary to calculate the hundreds of thousands of possible features in an image. From there, we talked about how models are trained, both on an intuitive level and on a slightly more mathematical level, especially as it related to AdaBoost. Finally…
Last time, we talked about the intuition behind training an image classifier. We discussed its similarity to how infants learn to recognize objects and associate names/labels with them as they develop their ability to speak. We then applied that intuition to better understand how the Viola-Jones algorithm was trained. One thing that we haven’t talked about, however, was the math involved during this particular stage. Let’s take a brief look at that! If you’re scared, don’t worry too much. We’re simply taking what we’ve already discussed and seeing how the algorithm creates a trained model using only numbers!
Up until now we’ve discussed how computers “see” images, how algorithms detect objects within an image, and an ingenious shortcut utilized by Paul Viola and Michael Jones to greatly reduce the amount of time and processing power needed to train and run their algorithm 20 years ago. What we haven’t talked about is how algorithms learn to differentiate between objects. After all, an image to a computer is nothing more than an array of numbers. How is it able to tell that one section of numbers is a face and another section is the background? …
Last time, we discussed Haar-like features, what they are, how they help the Viola-Jones algorithm detect objects, and how they determine if they’ve found a feature. We covered a lot of information and if you’ve never heard of Haar-like features before, I highly recommend reading the previous article, as any summary I give will not be enough to give you the foundation needed to understand our topic today: The Integral Image.
To really underscore the importance of the Integral Image, let’s talk a little bit about…
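The excerpt ends here, but the core trick of the Integral Image can be sketched with NumPy. Each cell of the integral image holds the sum of every pixel above and to the left of it (inclusive), so the sum of any rectangle of pixels needs only four lookups instead of summing every pixel inside it. The function names and the sample array below are my own, for illustration:

```python
import numpy as np


def integral_image(img):
    # Running sum down the rows, then across the columns.
    return img.cumsum(axis=0).cumsum(axis=1)


def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] via four corner lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]          # strip above the rectangle
    if left > 0:
        total -= ii[bottom, left - 1]        # strip left of the rectangle
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]       # corner subtracted twice
    return total


img = np.arange(16).reshape(4, 4)  # toy 4x4 "image"
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))    # same as img[1:3, 1:3].sum()
```

This is why Viola and Jones could evaluate huge numbers of Haar-like features quickly: after one pass to build the integral image, every rectangular sum is constant-time.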