
New research to help Government of Canada fight disinformation on social media

Two alumni reunite with their professor to develop machine learning tools for National Defence
By: Clara Wong
November 22, 2021
Left to right: Pawel Pralat, Andrei Betlen and David Miller

Like all computer science students, alumni David Miller (Computer Science '16) and Andrei Betlen (Computer Science '17) took their share of mathematics classes. Neither could have imagined that years later, they'd reconnect with their former math professor, Pawel Pralat, on a problem of national importance.

This time, Betlen and Miller are co-founders of the data analysis software start-up Patagona Technologies. Since 2018, they've been tackling defence and security challenges for National Defence.

The pair recently pitched a solution to detect hostile influencers who organize campaigns to spread disinformation on social media. Now, they need the mathematics to finish building it. They've turned to Pralat, one of the world's leading experts in network science.

Together, they're creating first-of-their-kind hybrid algorithms that combine two distinct machine learning approaches into a single super-tool, one that simultaneously analyzes user content and the underlying structure of social networks. The research expands on Pralat's prior work in a growing area of data science: graph embeddings.

Ferreting out bots, cyborgs and hostile actors
Correct, incorrect and deliberately misleading information flows freely on social media platforms. Far less obvious is that certain content, sometimes in sizable amounts, is created by "hostile actors" masquerading as legitimate users.

Behind the curtain may be foreign states, criminals or terrorists. Their goal is to manipulate genuine users to advance a particular cause: swaying elections, introducing ideologies, undermining public confidence, and so on.

Their tools of choice: bots (internet robots) and cyborgs (hybrid accounts jointly managed by humans and bots). These automated super-spreaders can propagate ideas faster than humans can consume or properly evaluate them. Distorted perceptions of reality can quickly result.

Patagona Technologies' product, when completed, will fortify the government's ability to detect such nefarious activity and counteract disinformation campaigns that threaten national security.

"It's a very important problem, especially for Canada's peacekeeping and humanitarian missions," says Betlen. "We're providing new tools and technical capabilities to help limit the ability of adversarial nation states and terrorist organizations to disrupt and undermine these missions through social media manipulation."

An illustration of an embedding of social media users: similar users are embedded close to each other.

New solution: User content + network structure
The main challenge is to develop machine learning algorithms that can differentiate artificial social media accounts from ordinary ones.

Patagona Technologies' innovation is the first of its kind to do so by integrating two standalone techniques into a single tool. One searches for suspicious content (posts, comments, hashtags); the other, for abnormal network structure (connections, communities, interactions, complex social manoeuvres, etc.).

Embeddings are an integral part of the framework. The technique takes large, complex datasets 鈥 such as words in content, or users and connections 鈥 and converts them into vectors for easier handling by machine learning models.
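The idea can be sketched with a toy bag-of-words embedding. This is an illustration only, not the team's actual models: each post becomes a count vector over a small hypothetical vocabulary, and cosine similarity then measures how close two pieces of content are in the vector space.

```python
from collections import Counter
from math import sqrt

def embed(text, vocab):
    """Toy bag-of-words embedding: map a post to a count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical vocabulary and posts, purely for illustration.
vocab = ["election", "vote", "fraud", "recipe", "cake"]
post_a = embed("vote fraud in the election vote", vocab)
post_b = embed("election fraud vote", vocab)
post_c = embed("cake recipe with extra cake", vocab)

# Posts about similar topics land close together in the vector space.
print(cosine(post_a, post_b), cosine(post_a, post_c))
```

Production systems use far richer embeddings (learned from large corpora rather than raw counts), but the principle is the same: once data is represented as vectors, standard machine learning models can work with it.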

Pralat is working on the graph embeddings side with a large team of students and researchers, including Ryerson mathematics PhD student Ash Dehghan and a professor at the SGH Warsaw School of Economics.

Pralat elaborates: "Bots and cyborgs behave differently from a network science point of view. The structure around them is different. With graph embeddings, we can assign coordinates to the data and find patterns in how bots operate, regardless of subject matter. Right now, we're still conducting experiments, including a fun one using data from Marvel comic books to identify villains among other characters."
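The "structure around them is different" point can be made concrete with a crude structural sketch. On a tiny hypothetical follower graph (not real data, and much simpler than a learned graph embedding), give each account two coordinates derived from its network position: its degree and the average degree of its neighbours. A bot-like amplifier tends to have many links to accounts that are otherwise barely connected, which already separates it from ordinary users.

```python
# Hypothetical undirected follower graph: account -> set of connected accounts.
graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob", "bot"},
    # Bot-like amplifier: many links, and its neighbours barely interconnect.
    "bot": {"carol", "u1", "u2", "u3", "u4"},
    "u1": {"bot"}, "u2": {"bot"}, "u3": {"bot"}, "u4": {"bot"},
}

def structural_coords(node):
    """Crude structural 'coordinates': (degree, average degree of neighbours)."""
    nbrs = graph[node]
    degree = len(nbrs)
    avg_nbr_degree = sum(len(graph[n]) for n in nbrs) / degree
    return (degree, avg_nbr_degree)

for node in ["alice", "bot"]:
    print(node, structural_coords(node))
```

Real graph embeddings learn many more dimensions automatically, but the takeaway is the same: structural position alone, independent of what an account posts, carries signal.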

Once the role of social media influencers is clearly defined, the framework will include a classifier that analyzes users and predicts hostile actors.
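A minimal sketch of how such a classifier could combine the two signal types, under the assumption (not stated in the article) that it concatenates a content feature vector with structural coordinates and scores the result. The function names, weights, and threshold here are all hypothetical stand-ins for a trained model.

```python
def combined_features(content_vec, structure_vec):
    """Concatenate content features and structural coordinates into one vector
    (hypothetical; the actual hybrid model is not described in the article)."""
    return list(content_vec) + list(structure_vec)

def looks_hostile(features, weights, threshold=1.0):
    """Stand-in linear classifier: weighted score compared against a threshold.
    A real system would learn the weights from labelled accounts."""
    score = sum(w * f for w, f in zip(weights, features))
    return score > threshold

# Example: content score pair plus (degree, avg neighbour degree) from the network.
feats = combined_features([0.9, 0.1], (5, 1.4))
weights = [1.0, 0.0, 0.2, -0.5]  # hypothetical learned weights
print(looks_hostile(feats, weights))
```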

Looking Ahead
The project runs through to next fall. Meanwhile, Betlen and Miller will work with government military experts on feedback for the platform, an exciting prospect for two young graduates competing with larger, established firms in the tight Canadian defence space.

Beyond national security, the new platform could also be used in non-governmental contexts. Patagona Technologies is already talking with researchers and organizations that study domestic disinformation about using the tools to harvest insights that could inform Canadian public policy.

Miller reflects on the path he and Betlen have taken since their Ryerson days: "We definitely wouldn't be here now without the education we got from Ryerson, including experience with start-ups from the Zone Learning ecosystem. Andrei and I have the freedom to explore and go after problems that are incredibly relevant and interesting to us. If we're still hacking around together in another ten years, that would be great. But we're very happy with where we're at right now."
