What Twitter & Facebook Teach Us About Machine Learning

“What Twitter and Facebook Can Teach Us About Machine Learning” was originally published with JaxEnter on Oct. 10, 2019. 

Tech giants Facebook and Twitter are experts at using machine learning, but their successes have come with some spectacular missteps. Keep these tips in mind to improve your own business model without losing sight of the end-user experience.

Facebook and Twitter have left most other companies far behind when it comes to using machine learning to improve their business models. But their practices haven’t always drawn the best reactions from end-users. There’s much to learn from these companies about what to do, and what not to do, when scaling and applying data analytics.

Get the data you need first

It seems like Facebook uses machine learning for everything: content detection and content integrity, sentiment analysis, speech recognition, fraudulent account detection, facial recognition, language translation, and content search. The platform manages all this while offloading some computation to edge devices to reduce latency.

Offloading allows users with older mobile devices (more than half of the global market) to access the platform faster. It’s also an excellent tactic for legacy systems with limited computing power, which can use the cloud to handle the torrent of data. Introducing accessible real-world metadata can further improve cloud-based systems through customization, correction, and contextualization.
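
To make the idea concrete, here’s a hypothetical sketch of the kind of routing decision an edge-offload scheme involves. This is not Facebook’s actual system; the function name, thresholds, and fallback tiers are all invented for illustration:

```python
# Hypothetical edge-vs-cloud routing decision. The thresholds, names,
# and fallback tiers are illustrative assumptions, not a real system.
def choose_inference_target(device_ram_mb: int, network_rtt_ms: float) -> str:
    ON_DEVICE_RAM_FLOOR = 512  # assumed minimum RAM for the full on-device model
    LATENCY_BUDGET_MS = 150    # assumed acceptable round trip for a cloud call
    if device_ram_mb >= ON_DEVICE_RAM_FLOOR:
        return "edge"       # capable device: run locally for the lowest latency
    if network_rtt_ms <= LATENCY_BUDGET_MS:
        return "cloud"      # weak device, fast network: offload to the cloud
    return "edge-lite"      # weak device, slow network: use a smaller local model

# An older phone on a decent connection gets routed to the cloud.
print(choose_inference_target(device_ram_mb=256, network_rtt_ms=90))  # "cloud"
```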

Start by thinking about what data is really needed, and which of those datasets are most important. Then start small. Too often, teams get distracted in the rush to do it now and do it big, but the real objective is to do it right. Focus on modest efforts that work, then expand the application to cover more datasets or to adapt more quickly to changing parameters. By building on early successes and scaling upward, you’ll avoid the early failures caused by taking on too much data too soon. Even if a failure does happen, the momentum of smaller successes will propel the project forward.

Automate training

Machine learning requires ongoing modification and training to remain fresh. Both Twitter and Facebook use Apache Airflow to automate the training that keeps their platforms updated, sometimes on hourly cycles. How much and how often you retrain will depend on computing costs and the availability of resources, but ideal algorithm performance relies on properly scheduled training for the dataset.
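
As a rough illustration, here is a minimal Airflow DAG that kicks off retraining every hour. It assumes Airflow 2.x, and the DAG id and the retrain() body are placeholders rather than either company’s real pipeline:

```python
# A minimal hourly retraining DAG. Assumes Apache Airflow 2.x;
# the DAG id and retrain() body are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain():
    # Placeholder: pull fresh data, refit the model, publish the new version.
    print("retraining model on the latest data")

with DAG(
    dag_id="hourly_model_retrain",
    start_date=datetime(2019, 10, 1),
    schedule_interval="@hourly",  # hourly cycles, as described above
    catchup=False,
) as dag:
    PythonOperator(task_id="retrain_model", python_callable=retrain)
```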

One of the biggest challenges may be choosing the type of learning to employ for the AI model. While deep learning methods have been the first choice for dealing with large datasets, classic tri-training may create a strong baseline that outperforms deep learning, at least for natural language processing. While tri-training cannot be fully automated, it may produce higher-quality results through the use of diverse modules and democratic co-learning.
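
For readers unfamiliar with tri-training, here is a deliberately simplified sketch of the core idea, assuming scikit-learn-style classifiers and NumPy arrays. It omits the error-rate safeguards of the full algorithm, and all names are illustrative:

```python
# Simplified tri-training: three learners trained on bootstrap samples;
# when two agree on an unlabeled example, their shared label augments
# the third learner's training set. Omits the full algorithm's
# error-rate checks for brevity.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def tri_train(X_labeled, y_labeled, X_unlabeled, rounds=5):
    # Diversity comes from bootstrap-resampling the labeled data.
    models = []
    for seed in range(3):
        Xb, yb = resample(X_labeled, y_labeled, random_state=seed)
        models.append(DecisionTreeClassifier(random_state=seed).fit(Xb, yb))

    for _ in range(rounds):
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            pred_j = models[j].predict(X_unlabeled)
            pred_k = models[k].predict(X_unlabeled)
            agree = pred_j == pred_k
            if not agree.any():
                continue
            # Retrain learner i on the labeled data plus agreed pseudo-labels.
            X_aug = np.vstack([X_labeled, X_unlabeled[agree]])
            y_aug = np.concatenate([y_labeled, pred_j[agree]])
            models[i] = DecisionTreeClassifier(random_state=i).fit(X_aug, y_aug)
    return models
```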

Pick the right platform

One of the challenges both Twitter and Facebook now face is standardizing their initially unstructured approach to building frameworks, pipelines, and platforms. Facebook now relies heavily on PyTorch, while Twitter uses a mix of platforms, having moved from Lua Torch to TensorFlow.

To choose the right AI tool, look for a platform that can scale, and think through the long-term needs of the company.

Don’t forget the end-user

A search for ‘machine learning’ and ‘Facebook’ together inevitably brings up hundreds of blog posts and articles on the negative feelings some users have about the AI features built into the site. Loss of privacy, data mining, and targeted advertising are some of the less worrying accusations thrown at the company. And yet many of the same users appreciate other AI tools: those that let them connect with friends and family who don’t speak their language, and those that keep the platform free of pornography and hate speech (if somewhat imperfectly).

It was not the technology itself but the lack of transparency in how Facebook implemented machine learning on its platform that frustrated users and mobilized some against it. Don’t make the same mistake. Trust and transparency should be watchwords for all major decisions. End-users will appreciate it, and they will leave a well-designed site with the sense that they have gained something from the interaction instead of feeling personally violated by it.

Read our longer blog post if you’re still asking “What is machine learning?”

AX Control can help you with your industrial automation parts replacement needs. Talk to our team today. We’re here to help!

What is Machine Learning?

What is Machine Learning, Simply?

Machine learning, or rather the idea that machines can learn to ‘do’ without an explicit set of instructions (programming), has been the basis of many movies where humans end up getting the short end of the deal. But is machine learning truly that dire?

Unlikely. Machine learning, which is a subcategory of artificial intelligence, is simply a way for machines to imitate intelligent human behavior. It’s a type of data analysis that allows programs to learn via experience in order to complete complex tasks, much like humans problem-solve. This type of learning typically breaks down into two specific types: deep learning and reinforcement learning. But what’s the difference?

Deep Learning

Deep learning is essentially what you see in any young child as they start to understand that while chickens are birds, not all large birds are chickens. It is based upon the ability to classify both the common features (in this case: feathers, beaks, wings, etc.) and the uncommon features that separate each grouping from the others (sound, size, feather pattern, beak length). This kind of hierarchical feature learning stacks multiple layers of learning nodes, as observed data from one layer produces new outputs that are then fed to a higher level.

In deep learning, the machine begins with raw data that must then be sorted into relevant and irrelevant subsets. As the machine is exposed to more data, it improves over time, similar to how a baby learns.
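
That layer-stacking is easy to see in code. Here’s a minimal sketch using PyTorch; the layer sizes and the class labels are made up for illustration:

```python
# A minimal stack of learning layers in PyTorch. Each layer's outputs
# feed the next, higher-level layer; sizes here are illustrative only.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32),  # first layer picks up low-level features (edges, textures)
    nn.ReLU(),
    nn.Linear(32, 16),  # middle layer combines them into higher-level features
    nn.ReLU(),
    nn.Linear(16, 2),   # final layer decides, e.g., "chicken" vs. "other bird"
)
```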

Reinforcement Learning

Meanwhile, reinforcement learning relies more on trying out slight variations of a problem. As results occur (favorable and unfavorable), the data sets change until the best outcome emerges. This is reminiscent of “The Good Place,” as Michael tries to create a better version of his neighborhood.

Reinforcement learning uses a closed-loop algorithm in which each action receives feedback in a trial-and-error process until the best action is determined.
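
Here’s a toy version of that closed loop, using nothing beyond the Python standard library; the two actions and their hidden payoff rates are invented for illustration:

```python
# Trial-and-error in miniature: an epsilon-greedy learner tries actions,
# gets feedback, and updates its estimates until the best action emerges.
import random

payoffs = {"A": 0.3, "B": 0.7}          # hidden reward rates (invented)
estimates = {a: 0.0 for a in payoffs}   # the learner's running estimates
counts = {a: 0 for a in payoffs}

for step in range(1000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        action = random.choice(list(payoffs))
    else:
        action = max(estimates, key=estimates.get)
    feedback = 1 if random.random() < payoffs[action] else 0  # closed-loop feedback
    counts[action] += 1
    # Nudge the estimate toward the observed feedback (incremental average).
    estimates[action] += (feedback - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # almost always "B", the better action
```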

Continue reading “What is Machine Learning?”

How Model T Thinking Shapes 21st-Century Manufacturing

“Tourists in a Ford Model T at the ‘Devil’s Den’ at Gettysburg Battlefield in Pennsylvania, c1910-1915” by crackdog is marked under CC PDM 1.0. To view the terms, visit https://creativecommons.org/publicdomain/mark/1.0/

One of the greatest challenges for any successful business is knowing when it’s time to change. After all, conventional wisdom says “if it ain’t broke, don’t fix it.” But with 21st-century manufacturing technology changing at such a rapid pace, those who stand still will soon be left behind.

The last time the world saw technological advancements at this pace, Henry Ford was just figuring out the assembly line. By looking back at Ford’s adoption of the new technology of his time, we may be able to learn how to read today’s technological trends. That knowledge will help us invest in AI and automation at the most advantageous time for our manufacturing, warehousing, and distribution systems.

Leverage Automation

“In the Ford Model T, the transmission, magneto, and engine were mounted together as a unit, all lubricated by the same oil” by The Henry Ford is licensed under CC BY-NC-SA 2.0

Henry Ford was not a newcomer to the car business when he began producing the Model T in 1908. Before starting the Ford Motor Company, he had worked for other automotive companies and built earlier vehicles, including the Quadricycle and the famous 999 race car. But he dreamed of a vehicle for ‘the great multitude,’ and so the Model T was born.

Unfortunately, the original Model T was still too expensive for most Americans. When Ford began churning the cars out via assembly line, however, the price dropped significantly.

In 1909, workers were still piecing cars together using traditional methods. That year, a Model T cost $825, and production ran under 11,000 units. But in 1916, three years after Ford started using assembly-line production, the Ford Motor Company produced over half a million Model Ts. Each one sold for $345.

Continue reading “How Model T Thinking Shapes 21st-Century Manufacturing”