Word Embeddings & Geometric Deep Learning for Network Analysis


In this talk I discuss Word Embeddings and Geometric Deep Learning for Social Network Analysis. First, let me thank the useR! organizers for setting up the conference online despite the Coronavirus crisis. So, let's get started.

First, just an overview of what I am going to talk about: the background and the model. The background is Word Embeddings which, as we know, is the representation of words as vectors in a high-dimensional space.
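Just to fix ideas, here is a toy sketch of my own (the embedding matrix is random, purely for illustration): each word is a vector, and similarity between words is the cosine of the angle between their vectors.

```r
# Toy sketch (purely illustrative): each word is a row vector of an embedding
# matrix; here the vectors are random, a real model would learn them from text.
set.seed(1)
vocab <- c("inflation", "rates", "holiday", "beach")
emb <- matrix(rnorm(length(vocab) * 50), nrow = length(vocab),
              dimnames = list(vocab, NULL))  # four 50-dimensional word vectors

# Similarity between two words is the cosine of the angle between their vectors
cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
cosine(emb["inflation", ], emb["rates", ])
```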

As an application, Kozlowski and co-authors have done relevant work recently, about two years ago. My extension is primarily focused on social network analysis, but it can also be extended to other fields.

The interesting part is that some features of social networks, for instance hashtags, word association rules and the network structure of users, are not fully exploited, and this is the gap I am trying to fill with this new project.

So, let me give you an overview of the model. The inputs of the model are a large amount of texts, users, tags, likes, and friendships or follows on social networks.
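To give a concrete, purely hypothetical picture of these inputs (the column names and values below are mine, not from the project), they can be thought of as a few small tables:

```r
# Hypothetical input tables: texts written by users, the tags they use,
# and friendship / follow edges between users.
texts   <- data.frame(user = c("u1", "u1", "u2"),
                      text = c("inflation and rates", "rates up again",
                               "beach holiday sun"))
tags    <- data.frame(user = c("u1", "u2"), tag = c("#economy", "#travel"))
follows <- data.frame(from = "u1", to = "u2")
```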

From these inputs, three parts are developed. First, a Neural Network Word Embedding, which is nothing new; then, a Hierarchical Structure of Interests and Categories, hierarchical in the sense that there is a sort of hierarchy over the words; then, a Social Network Structure of Friendships and Interests.

The second and third parts are then combined to create the salience, a new idea that captures the diffusion and the frequency of a word in the space. By combining all those parts, we get to the model. The cornerstone of the model consists of two functions, one for words and the other for users.
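A hedged sketch of what salience could look like, assuming it simply combines frequency with diffusion across users' posts (the exact functional form used in the project may differ), reusing the hypothetical `texts` table from above:

```r
# Hedged sketch of salience: frequency of a word times its diffusion across
# posts/users; the exact functional form in the project may differ.
salience <- function(word, texts) {
  tokens    <- strsplit(tolower(texts$text), "\\s+")
  freq      <- sum(vapply(tokens, function(tok) sum(tok == word), numeric(1)))
  diffusion <- mean(vapply(tokens, function(tok) word %in% tok, logical(1)))
  freq * diffusion
}

salience("rates", texts)  # "rates" occurs twice, in 2 of the 3 posts above
```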

The output of the model is a projection of the words, with their salience, and a network of users in one unique Word Embedding space. This is what is really new in my project. Here we have an overview of the model in a nutshell. In addition to what I said, two more functions enter the model, one for filtering and the other for mapping.

The mapping function, in particular, maps the users into the Word Embedding space. In a first trial we will implement it using the library reticulate, and at the end the output will be our Word Embedding space, with the salience, this new feature, and the network of users.
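One simple way to map a user into the word space, sketched under my own assumptions and not necessarily the function used in the project, is to average the vectors of the words that user writes, reusing the toy `emb` matrix from the first sketch:

```r
# Assumed mapping (a common baseline, not necessarily the project's function):
# a user's position is the average of the vectors of the words they use.
map_user <- function(user_tokens, emb) {
  known <- intersect(user_tokens, rownames(emb))
  colMeans(emb[known, , drop = FALSE])
}

map_user(c("inflation", "rates"), emb)  # a 50-dimensional "user vector"
```

On the implementation side, a minimal sketch of the reticulate route, assuming a Python environment with gensim (4.x API) installed and using a toy corpus:

```r
# Minimal sketch of the reticulate route: call gensim's Word2Vec from R.
library(reticulate)
gensim <- import("gensim")

sentences <- list(c("inflation", "rates", "policy"),
                  c("beach", "holiday", "sun"))
w2v <- gensim$models$Word2Vec(sentences, vector_size = 10L,
                              min_count = 1L, window = 2L)
w2v$wv$most_similar("rates")
```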

Having the words, their salience and the users all in the same space is the novelty. The model could also be applied dynamically, to evaluate the evolution of the structure of words and users over time, or for applications on Facebook.

There will be a limited sample on Twitter, and it will also be interesting to compare different social networks. As an economist, I also have in mind an application involving the European Central Bank Speeches Database, to analyze the evolution of the economic culture, of words, and of central bankers and policymakers as users, over the last 20 years.

So, thank you very much. There is a GitHub repository; at the moment it is private, but it will be opened in September. Thank you very much, and please leave a comment if you have any questions.
