How Computers are Better at Math than you

Math is hard.

I find this the case, and many of you do as well. From basic algebra to calculus and beyond, many different fields of mathematics can be difficult to grasp.

Finding the roots of a basic quadratic equation, for example, is meant to take grade 10 students a whole semester (four months) to grasp under the Ontario curriculum, and even then they can take two or even five minutes to solve a single one of these basic quadratic equations.

For this reason, many of us have computers do the math for us. Rather than manually solving for the roots of a quadratic, we might instead graph it on our trusty TI-84 or, if permitted, use Desmos, all within 30 seconds or less.
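To make this concrete, here is a minimal sketch of what a calculator does internally in this case: applying the quadratic formula directly. The function name and the sample coefficients are my own illustrations.

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c        # the discriminant decides how many real roots exist
    if disc < 0:
        return []                   # no real roots
    if disc == 0:
        return [-b / (2 * a)]       # one repeated root
    sq = math.sqrt(disc)
    return [(-b - sq) / (2 * a), (-b + sq) / (2 * a)]

# x^2 - 5x + 6 = (x - 2)(x - 3), so the roots are 2 and 3
print(quadratic_roots(1, -5, 6))  # → [2.0, 3.0]
```

A computer evaluates this in microseconds, which is the whole point of the next few sections.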

And this is just one of countless examples. Let's face it: this hunk of metal can perform calculations, draw a graph, or do whatever else you would like it to do far better than either of us, and it can do it much faster too.

But how do these systems work? How can a large hunk of metal and plastic be far better at performing math calculations than the most complex structure to exist — our brain? Well, the answer lies in programming, algorithms, and of course, in the 21st century, a touch of neural networks, and machine learning.

Before continuing, this article assumes math knowledge at least up to the high school standard. Without further ado, how does that handy TI-84 graph the function e²ˢⁱⁿ ˣ (something that would probably take you many minutes) before you can even blink?

First, let's explore how Desmos graphs practically anything you throw at it. The key is not too difficult to implement yourself: it is simply teaching a computer how to find the roots of an equation graphically.

The steps for finding roots graphically are as follows: graph the left side of the equation as one function, graph the right side as another, and read off the x-values where the two graphs intersect.

While this could be hard to comprehend, it can be helpful to imagine the equation itself as a balance with each side as a total weight and the variable as a manner in which the weight can be adjusted. Both sides must be perfectly in balance for the balance to stay flat and, thus, for the equation to be true.

By graphing all values of the variable being solved for, we are essentially testing every adjustment of the weights until both sides are equal, that is, until the two graphs intersect; the adjustment at that point is the value (or values) of the variable.

Now let's have a computer do it. Using this principle, a computer can evaluate both sides of the equation at many closely spaced values of the variable and report the points where the two results cross.

This is essentially how calculators such as the TI-84 graph functions, and also how I solved many of the questions on my last math test!
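A rough sketch of this graph-and-look approach in code: evaluate the function at many closely spaced points, exactly as a plotter would, and report an approximate root wherever consecutive values change sign. The function and interval below are arbitrary examples of mine, not any particular calculator's implementation.

```python
def find_roots_graphically(f, lo, hi, steps=100000):
    """Trace f across [lo, hi]; wherever consecutive samples change sign,
    a root lies between them, so report the midpoint of that tiny interval."""
    roots = []
    dx = (hi - lo) / steps
    x = lo
    prev = f(x)
    for _ in range(steps):
        x += dx
        cur = f(x)
        if prev * cur < 0:                    # sign change: the graph crossed the x-axis
            roots.append(round(x - dx / 2, 2))
        prev = cur
    return roots

# x^2 - 2 crosses the x-axis at ±√2 ≈ ±1.41
print(find_roots_graphically(lambda x: x * x - 2, -10, 10))  # → [-1.41, 1.41]
```

The finer the step size, the more accurate the reported roots, which is why a calculator samples thousands of points per screen width.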

But what if we wanted to go a little further and find a way that didn't involve graphing? What if we wanted the computer to return the answer of the equation by itself, rather than having us trace a graph? What if we wanted to do more than just find roots (e.g. fully factor a polynomial or even prove a theorem)? This is where Computer Algebra Systems come in.

A Computer Algebra System, or CAS for short, is any software that can manipulate mathematical expressions (e.g. 3x⁵ − 52x³) the way humans do math manually. For example, a CAS can factor the expression above as x³(3x² − 52). This is done using many complex algorithms, one for each task that is required.
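As an illustration of one tiny CAS step, here is a sketch that pulls the greatest common monomial factor out of a polynomial stored as an exponent-to-coefficient map. Real systems like Maxima use far more sophisticated factoring algorithms; this representation and function are my own simplification.

```python
from functools import reduce
from math import gcd

def factor_common(poly):
    """poly maps exponent -> coefficient, e.g. {5: 3, 3: -52} is 3x^5 - 52x^3.
    Returns (g, k, inner) such that poly == g * x^k * inner."""
    k = min(poly)                                      # lowest power of x present
    g = reduce(gcd, (abs(c) for c in poly.values()))   # gcd of the coefficients
    inner = {e - k: c // g for e, c in poly.items()}
    return g, k, inner

# 3x^5 - 52x^3  ->  x^3 * (3x^2 - 52)
print(factor_common({5: 3, 3: -52}))  # → (1, 3, {2: 3, 0: -52})
```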

You can think of this as a kitchen where each appliance or utensil serves a different purpose, but each is essential to making dinner. For example, while making rice, a rice cooker is ideal.

A more suitable but also non-trivial example would be finding square roots. As a human, to find the square root of a number (without a calculator, of course), you would simply keep trying values, approximating closer and closer to the actual result each time. A computer, on the other hand, cannot naively try every value, because for large or awkward numbers that would mean performing millions of operations just to get one square root.

An algorithm that solves this is Newton's method, which is merely a more sophisticated way of refining an approximation that works backwards from the function and its slope.

Given a function and a point on it (some initial guess), one can work backwards using the slope of the function at that point, i.e. its derivative. This is using:

x₁ = x₀ − f(x₀) / f′(x₀)

Where x₀ is an initial guess close to the root (e.g. 1), and x₁ is a closer approximation of the root. f(x) is the function whose roots we are trying to find, and f′(x) is its derivative. The formula can be applied repeatedly to get an ever more accurate root.
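The update rule translates directly into code. This is a minimal sketch; the cubic, its derivative, and the starting guess are arbitrary examples of mine.

```python
def newtons_method(f, df, x0, tol=1e-10, max_iter=100):
    """Refine x0 with x_new = x - f(x)/f'(x) until successive guesses
    barely change, giving an approximate root of f."""
    x = x0
    for _ in range(max_iter):
        nxt = x - f(x) / df(x)
        if abs(nxt - x) < tol:      # the change between iterations is minimal
            return nxt
        x = nxt
    return x

# Root of f(x) = x^3 - x - 2, starting from a guess of 1.5
root = newtons_method(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5)
print(round(root, 5))  # → 1.52138
```

Note the stopping condition: rather than iterating forever, we stop once the change between iterations drops below a tolerance.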

Think of it like measuring the distance between where a ball lands from falling off a ramp and a wall.

The ball rolls until it hits the ground (assume the ball is magical and does not roll further or bounce), where you can measure its position from the wall, just like reading off the root of a function once the difference between iterations of Newton's method is minimal.

The function is cut off by the x-axis, just like the ramp cannot continue under the ground, and thus the ball cannot go underground. Think of the many iterations performed as the seconds the ball takes to roll down.

Now, how does this relate to finding a square root? Quite a bit, actually: Newton's method finds roots, i.e. x-intercepts, of a function.

As a human, one can instinctively square a number (i.e. multiply it by itself) to test whether a guess is the square root, which can be modelled by the function f(x) = x², where x is the number being tested.

If we are trying to find the square root of n, so that x² = n, then all we are trying to do is solve for (the positive value of) x. Functions can be transformed predictably: x² − h shifts the parent function f(x) = x² down by h units, which moves its x-intercepts to x = ±√h.

Thus, using the function f(x) = x² − h, where h is the number whose square root we are trying to find, and Newton's method, one can obtain the square root of h by iterating until the change between xₙ and xₙ₋₁ is less than some minimal amount, giving the symbolic representation:

xₙ = xₙ₋₁ − (xₙ₋₁² − h) / (2xₙ₋₁)

Where 2xₙ₋₁ is the derivative of f(x) evaluated at xₙ₋₁, and h is the number (supplied by the user in the case of a program) whose square root we want. The initial guess x₀ can be almost any positive value, such as 1 or 3.

Using this algorithm, tedious for us yet menial for a computer that can perform thousands of such refinements in succession, a computer can approximate thousands of square roots before I can even come up with the square root of 64.
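Putting the pieces together, here is a sketch of a square-root routine built on Newton's method applied to f(x) = x² − h. The function name, default guess, and tolerance are my own choices.

```python
def newton_sqrt(h, guess=1.0, tol=1e-12):
    """Approximate sqrt(h) by driving f(x) = x^2 - h to zero with
    Newton's method; f'(x) = 2x gives the update below."""
    x = guess
    while True:
        nxt = x - (x * x - h) / (2 * x)   # x_n = x_(n-1) - f(x_(n-1)) / f'(x_(n-1))
        if abs(nxt - x) < tol:            # stop once the change is minimal
            return nxt
        x = nxt

print(round(newton_sqrt(64), 6))  # → 8.0
```

This is essentially how many calculators and math libraries compute square roots: a handful of iterations already gives more digits than a screen can show.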

Finding a square root might be trivial compared to how my calculator can solve a system of three equations in the blink of an eye. So how do these more complex computer algebra systems work? It is the same concept as above: implementing an algorithm (complex for humans but not for computers) for each function of the algebra system and applying each where needed.

Take your calculator, for instance. When you press the square root button, it runs the routine for computing square roots, which uses its own algorithm; when you press the log button, the same applies but with a separate algorithm. Maxima, a descendant of the Macsyma system developed at MIT in the 1960s, is a spotlight example of a complex computer algebra system used by many.

“Maxima is a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, sets, lists, vectors, matrices and tensors. Maxima yields high-precision numerical results by using exact fractions, arbitrary-precision integers and variable-precision floating-point numbers. Maxima can plot functions and data in two and three dimensions.”

All of these algorithms come together (in the src directory, in this case) to form Maxima, which is comparable to a fully stocked kitchen cupboard containing all the tools necessary to make dinner, or in this case, to finish your math homework.

Websites like Wolfram Alpha work similarly, containing all the tools required to solve anything you throw at them. But how do tools such as Symbolab work, where even a picture of the math is enough? This is where machine learning, and more specifically neural networks, come into play. So what exactly is a neural network?

Imagine yourself for a moment as a budding software developer who has recently picked up a client with a peculiar problem. The client is a bird watcher who has gathered around 1000 pictures of birds, all at the same resolution, and wants to print only the 100 highest-quality ones to frame in his house.

For an esteemed developer like yourself, this is trivial. All you would need to do is take the picture files and sort them by file size, using whatever sorting algorithm your language happens to provide.

Neither you nor the bird watcher, who is completely oblivious to even the steps above, care at all about how this sorting algorithm is implemented, hence it being a black box.

Now that the bird watcher has these high-quality images on their wall, he wants to know which bird is which type.

He got the pictures from an experienced acquaintance who only took pictures of blue jays and hummingbirds, and as someone who wants to take more pictures of the same, he asks you to make a program that distinguishes between the two.

For a human this may be obvious; you can just look at the colour. But for a computer, which cannot look at anything and only has access to the pictures as a series of bits, it can be a pretty hard problem.

Undeterred, you go back to your black box and look at both the input, which would unsurprisingly be a list of each pixel in the file, and the output: either a blue jay or a hummingbird.

You decide that you want the program to teach itself the difference between a hummingbird and a blue jay, and you come up with a method to transform the pixels from the file into either a 1 or a 0, with 1 meaning hummingbird and 0 meaning blue jay.

This method would be multiplying the numerical RGB representation of each pixel by arbitrary weights. There would be multiple layers, each serving a different purpose such as recognizing the bird, the bird's beak, etc., or so you think.

While you know these weights can be and mostly are arbitrary numbers, it is useful to think of each set of weights as unmasking and restricting certain parts of the bird through each layer of weights.

Now, if you wanted the program to actually learn on its own, you would need a way of updating the weights based on the correctness of the program. Let's say you designed a performance function that simply compares the result (hummingbird or blue jay) with the filename. If the performance function determines the answer is correct, you do nothing to the weights; if it is incorrect, you set each of the weights to random values.
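Here is a toy sketch of that naive scheme. The two-number "images", labels, and seed are all made up for illustration, and real networks adjust weights gradually rather than re-randomizing them, but the keep-if-correct loop is the same idea.

```python
import random

# Stand-in for the bird pictures: each "image" is reduced to two colour
# features, labelled 1 for hummingbird and 0 for blue jay.
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.1, 0.9), 0), ((0.2, 0.8), 0)]

def predict(weights, features):
    s = sum(w * f for w, f in zip(weights, features))  # weighted sum
    return 1 if s > 0 else 0

def train_by_random_reset(data, seed=0):
    """The scheme from the text: keep the weights if every example is
    classified correctly, otherwise throw them away and draw new random ones."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    while any(predict(weights, x) != y for x, y in data):
        weights = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    return weights

w = train_by_random_reset(data)
print(all(predict(w, x) == y for x, y in data))  # → True
```

Random resets work on a toy problem like this; on real image data the search space is far too large, which is why gradient-based updates are used instead.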

A few months later, after many many tests, you finally have trained the program to near-perfect accuracy and the bird watcher can continue taking his pictures. What you came up with is what is most commonly referred to as a neural network.

A neural network, typically referred to as a model, is simply another program that takes in inputs and spits out a result. In this sense there is no difference between a neural network and a traditional computer program or function, which can likewise be modelled as a black box.

A neural network typically contains a weight corresponding to each input value. For each layer, the weighted sum, the total of each input multiplied by its corresponding weight, is computed. This happens through each layer of the neural network until the final, result layer is reached, where a result is established and then compared against the actual value for correctness.
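A minimal sketch of that layered weighted-sum computation. The layer sizes and weight values are arbitrary, and real networks also apply a nonlinear activation function between layers, which is omitted here.

```python
def layer(inputs, weights):
    """One layer: each neuron outputs the weighted sum of all its inputs.
    weights[i] holds one weight per input for neuron i."""
    return [sum(w * x for w, x in zip(neuron, inputs)) for neuron in weights]

def forward(inputs, layers):
    """Feed the inputs through every layer in turn; the output of the
    final (result) layer is the network's answer."""
    for weights in layers:
        inputs = layer(inputs, weights)
    return inputs

# Two inputs -> a hidden layer of two neurons -> one output neuron
hidden = [[0.5, -0.5], [0.25, 0.75]]
output = [[1.0, 2.0]]
print(forward([1.0, 2.0], [hidden, output]))  # → [3.0]
```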

While it is helpful to think of each layer in a neural network as performing some task related to the goal of the program (i.e., in the case of the bird sorter, finding features of the bird), this is not actually the case, and much of the time the values of the weights are in fact arbitrary.

This is why the hidden layers themselves of the neural network can be thought of as a black box (for our purposes); we do not actually care how the model is learning, we just care about if it is, how fast it is, and the final working result of the model.

Finally, the weights are updated based on the neural network's performance (using back-propagation, of course), which is how neural networks learn on their own and improve over time, as seen in the function that checks whether the bird was correctly identified.

This fundamental model of a neural network, initially conceived by Arthur Samuel in his 1962 essay, captures the basic principle behind most, if not all, neural networks used in the past and today, including in math.

Based on this definition, one can imagine how something like image recognition could work: all the pixels are taken as input values and passed through some number of layers to arrive at a final classification of the image.

Picture Hugh, an 11th-grade math student who is currently struggling with his homework. You see, his math teacher rather unfairly gave the class the task of finding the value of x in the equation 4x² + 2x + 1 = 0 without teaching them about complex roots.

Confused by the square root of a negative number, Hugh turns to the trusty Symbolab app on his phone to teach him how to solve the equation. After scanning the problem using the camera in the Symbolab app, Hugh is given a detailed step-by-step solution, explaining complex roots and how to solve with complex numbers, making his teacher proud of him!

But how does this work? How did Symbolab recognize the math hidden behind the conventions of human writing and paper and then successfully solve it while enlightening Hugh through a step-by-step solution of the problem? The answer lies in what seems to be a convolutional neural network.

A convolutional neural network is a neural network that performs convolutions on each sequence of inputs it receives. But what is a convolution? A convolution is simply the application of some filter to the input values, whatever they may be.

This set of filters can be random values, similar to the parameters/weights that were previously described with the bird-watching neural network. Take the image of this 7 below for example. Let us say that we wanted to find the edges of the drawing.

The filter, a small grid of weights, is laid over one corner of the image, a weighted sum of the pixels beneath it is computed, and the filter then slides along one position at a time. This would happen until the end of the image, at which point the new sequence of pixels is the completed convolution, which is then passed through the next layer for further filtration. At the end of this, with weights correctly adjusted using some performance function and a valid data set, one can successfully classify digits, identify edges, and more. This is the basic idea behind CNNs.
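The sliding-filter idea fits in a few lines. Below, a hand-written vertical-edge kernel is slid over a tiny made-up image that is dark on the left and bright on the right; the output responds only where the brightness changes. Both the kernel and the image are illustrative assumptions, not Symbolab's actual filters.

```python
def convolve(image, kernel):
    """Slide a 3x3 kernel over the image (no padding); each output value is
    the weighted sum of the 3x3 patch of pixels beneath the kernel."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(3) for b in range(3))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter: negative on the left, positive on the right
edge = [[-1, 0, 1],
        [-1, 0, 1],
        [-1, 0, 1]]

# A tiny "image": dark (0) on the left half, bright (9) on the right half
img = [[0, 0, 0, 9, 9, 9]] * 4

print(convolve(img, edge))  # → [[0, 27, 27, 0], [0, 27, 27, 0]]
```

In a trained CNN the kernel values are learned rather than hand-written, but the sliding weighted sum is exactly the same operation.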

In the case of Symbolab, a CNN most likely handles the bulk of the preprocessing stage, breaking down the student's complex and intricate handwriting on paper into the actual math equation in computer form, identifying the various aspects of the equation.

This math equation is then passed over to a CAS such as Maxima or the like, which computes the result for the user to see. The advantage of using a CAS in a scenario like this is that the steps it took can be traced back, thanks to the use of individual functions for each operation. These steps can then be supplied to the user as a tutorial on how to solve such a problem. There you have it: Symbolab, using a CNN to interpret the images provided and a CAS at its core, has successfully helped Hugh solve his math homework!

Computers are one of, if not the, greatest inventions and advancements made by humanity within the last century or so. In mathematics, advances in computer science and machine learning, specifically through neural networks, have freed us from worrying about calculations at all. I believe, and I hope you do too, that through tools such as Wolfram Alpha, Maxima, Desmos and Symbolab, computers are in fact better at math than you.
