Gephi's Twitter Streaming Importer V2 is out!

Important note: the old version of the plugin is deprecated. Its latest release is 1.4.4, and it won’t be updated anymore.

Why a V2?

The old version of the plugin uses the Twitter Streaming API v1, which Twitter is currently deprecating. As a consequence, most new users of the plugin run into the infamous “HTTP 403” error and can’t get the plugin working.

The Twitter Streaming Importer V2 now uses the new Twitter API v2. You still need a developer account and an application that can use the v2 API (which should be the nominal case by now).

What changes?

Bearer Token

The old V1 and the new V2 APIs authenticate slightly differently, so you will need to reconfigure the credentials inside the plugin. Instead of the API Key / Access Token set of credentials, you now only need the Bearer Token, which you can generate from your Twitter application’s developer dashboard.
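If you want to sanity-check your token outside the plugin, every v2 endpoint expects it in a single `Authorization` header. A minimal sketch, assuming a placeholder token value (the helper function is illustrative, not part of the plugin):

```python
# Sketch: the header format the Twitter API v2 expects for Bearer auth.
# BEARER_TOKEN is a placeholder -- use the value generated in your
# Twitter application's developer dashboard.
BEARER_TOKEN = "AAAAAAAA...your-token-here"

def auth_headers(token: str) -> dict:
    # Every v2 request carries the token in one Authorization header;
    # no API key / access-token pair is involved anymore.
    return {"Authorization": f"Bearer {token}"}

headers = auth_headers(BEARER_TOKEN)
print(headers["Authorization"])
```

Sending a GET request with these headers to `https://api.twitter.com/2/tweets/search/stream/rules` should return the rules currently bound to your token; a 403 at this point usually means the application doesn’t have v2 access.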

Query Rules

The “query” is now fully handled by Twitter. The .json file used to save your query won’t be backported, as the Twitter API now takes a fundamentally different approach to querying the stream. Please read how to create rules in the official Twitter documentation about filtered streams.
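For reference, rules are plain JSON objects posted to the v2 filtered-stream rules endpoint. A sketch of the request body, with made-up rule values (the endpoint URL comes from Twitter’s v2 documentation):

```python
import json

# POST https://api.twitter.com/2/tweets/search/stream/rules
# takes a JSON body whose "add" key lists the rules to create.
# The rule values and tags below are illustrative examples.
payload = {
    "add": [
        {"value": "gephi OR #gephi", "tag": "gephi-mentions"},
        {"value": "from:Gephi has:media", "tag": "gephi-media"},
    ]
}
body = json.dumps(payload)
print(body)
```

Each `value` uses the filtered-stream rule syntax (operators like `from:`, `has:media`, boolean `OR`), which is documented in detail by Twitter.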

This new way of building rules has multiple advantages. The rules are saved and bound to your application / Bearer Token, which means they persist if you close Gephi and re-open it.

You can add and remove rules without restarting the running stream, and you can have multiple rules at once. Each rule can be flagged with a ‘tag’; the plugin uses these tags to create new columns on the nodes, so you can check which rules each entity matched.
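Since adding and deleting go through the same rules endpoint, the stream itself never has to restart. A sketch of the two payload shapes, where the rule id is a placeholder for the id Twitter returns when a rule is created:

```python
# Adding while the stream runs: "add" key, with an optional tag.
add_payload = {"add": [{"value": "cats has:images", "tag": "cat pictures"}]}

# Removing while the stream runs: "delete" key with the rule ids to drop.
# The id below is a placeholder, not a real rule id.
delete_payload = {"delete": {"ids": ["1234567890123456789"]}}

# The "tag" is what the plugin turns into a node column, so you can
# see which rules each node matched.
print(sorted(add_payload["add"][0].keys()))
```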

Again, as this mechanism is controlled entirely by the Twitter API, please read the official documentation for more details.

Other details

The API v2 has also changed the shape of the data Twitter returns. Moreover, the plugin had to migrate from the Twitter4J library to the official twitter-api-java-sdk.

These changes required rewriting the network logic to support the new way data is gathered. Fortunately, the rewrite was not that hard, and it was also a chance to review some of the logic and fix a few bugs. The network logic should behave mostly the same way as in the old version.

During the rewrite, some minor optimisations were also made; notably, entity creation should no longer lag far behind when running Force Atlas.