If you're anything like me, you're constantly bookmarking tweets that you want to come back to later. But when the time comes to actually find that one particular tweet, it's nearly impossible to do so. Wouldn't it be great if there was a way to automatically organize all of your Twitter bookmarks?
Well, good news! There is a way, and this guide will walk you through the entire process, from getting an auth token to fetching your bookmarks to adding them to Notion with their API. Plus, you can even set up a periodic job to save all of your future Twitter bookmarks automatically. So what are you waiting for?
Read on to learn how to get your very own Twitter bookmark organizer up and running!
Here's how you can create your own automatic Twitter bookmark organizer in Notion with the Twitter API, GPT-3 API, and Notion API.
Steps
1. Get Twitter developer API credentials
2. Get OAuth 2.0 Twitter API token
3. Fetch Twitter bookmarks
4. Get bookmark keywords with GPT-3 API (optional)
5. Get Notion API keys
6. Create a Notion table and connect with API
7. Add bookmarks to Notion
8. Refresh your auth token
Dependencies
Here's what we'll be using:
1. Python 🐍
2. Twitter API 🐦
3. Notion API 📝
4. GPT-3 API 🤖 (optional)
Let's get going.
1. Get Twitter developer API credentials
The first step is to get your app's client id and client secret from your Twitter developer account.
Go to the dashboard of your Twitter developer account, click + Add App, choose Create new, pick an environment and then choose a name.
Save the bearer token revealed at the last stage. You won't need it in this guide, but it's handy to have for other purposes, like supporting fellow indie makers you follow who are launching on Product Hunt.
Now that you have an app, it's time to set up user authentication. You'll need this to call certain endpoints, allowing your app to make specific requests for authenticated users.
Click on Set up below the newly created app, keep Read and choose Web App, Automated App or Bot because you'll need the confidential client:
Add the callback URL and website URL for your app. The callback/redirect URL is the destination that OAuth is allowed to redirect after the authentication process. You'll need this URL in the coming steps.
Make sure to save the client id and the client secret revealed. We'll use them in the coming steps when making API calls to get your bookmarks later.
Now your user authentication settings should be set up:
Let's use these credentials in the next step, where we'll get the auth token for your Twitter app.
2. Get OAuth 2.0 Twitter API token
In this part, where we'll generate a token using OAuth 2.0 Authorization Code Flow with PKCE, I'm using code from this great repo: OAuth 2.0 Authorization Code Flow with PKCE
Start by importing all the libraries:
import base64
import hashlib
import json
import os
import re

import requests
from requests.auth import HTTPBasicAuth
from requests_oauthlib import OAuth2Session
Then add your Twitter credentials, and use the same redirect URL as you added in the previous step:
client_id = "YOUR_CLIENT_ID"
client_secret = "YOUR_CLIENT_SECRET"
redirect_uri = "YOUR_REDIRECT_URL"
Set up permission scope:
# Set the scopes
# offline.access lets us fetch a new access token
# with the refresh token once the old one has expired
scopes = ["bookmark.read", "tweet.read", "users.read", "offline.access"]
The access token we're trying to generate is only valid for 2 hours. So if you want to fetch your Twitter bookmarks regularly, you'll need to obtain a new access token every time you want to call the endpoint.
The offline.access scope makes it possible to get a refresh token, so you can fetch a new access token without prompting a new login session once the old one has expired. The refresh token is valid for 6 months.
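Since the access token only lives for 2 hours, it can be handy to check whether it's still valid before calling the API. Here's a minimal sketch, assuming the token dict carries the `expires_at` Unix timestamp that requests_oauthlib adds when fetching a token:

```python
import time

def access_token_expired(token, leeway=60):
    """True if the access token is expired (or will be within `leeway` seconds)."""
    return time.time() >= token["expires_at"] - leeway

# A token that is still valid for another 2 hours:
fresh = {"access_token": "abc", "expires_at": time.time() + 7200}
print(access_token_expired(fresh))  # False
```

The `leeway` parameter is just a safety margin so you refresh slightly before the hard deadline.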
Then create a code verifier:
code_verifier = base64.urlsafe_b64encode(os.urandom(30)).decode("utf-8")
code_verifier = re.sub("[^a-zA-Z0-9]+", "", code_verifier)
Create a code challenge:
code_challenge = hashlib.sha256(code_verifier.encode("utf-8")).digest()
code_challenge = base64.urlsafe_b64encode(code_challenge).decode("utf-8")
code_challenge = code_challenge.replace("=", "")
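As a sanity check: the challenge is nothing more than the base64url-encoded SHA-256 digest of the verifier with the `=` padding stripped (the "S256" method from RFC 7636). You can verify the three lines above against the known example pair from the RFC's appendix:

```python
import base64
import hashlib

def s256_challenge(verifier):
    """Derive an S256 code challenge from a code verifier (RFC 7636)."""
    digest = hashlib.sha256(verifier.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).decode("utf-8").replace("=", "")

# The example verifier/challenge pair from RFC 7636, appendix B:
print(s256_challenge("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"))
# E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```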
Finally, start the OAuth 2.0 session:
oauth = OAuth2Session(client_id, redirect_uri=redirect_uri, scope=scopes)
Now it's time to create an authorize URL that you'll visit to allow your app to make requests on your behalf:
auth_url = "https://twitter.com/i/oauth2/authorize"
authorization_url, state = oauth.authorization_url(
    auth_url, code_challenge=code_challenge, code_challenge_method="S256"
)
Now visit the URL stored in the authorization_url variable to authorize your app to make requests on your behalf.
You'll be redirected to a new URL when you have authorized your app. Copy that URL in a new variable:
authorization_response = "THE_URL_YOU_GOT_REDIRECTED_TO_AFTER_AUTHORIZATION"
Now we can fetch our access token:
token_url = "https://api.twitter.com/2/oauth2/token"
auth = HTTPBasicAuth(client_id, client_secret)
token = oauth.fetch_token(
    token_url=token_url,
    authorization_response=authorization_response,
    auth=auth,
    client_id=client_id,
    include_client_id=True,
    code_verifier=code_verifier,
)
If you inspect the token dict, you'll see both the access token you need for the API call to fetch your bookmarks and the refresh token you'll need to generate a new access token once it has expired:
The next step is to fetch your bookmarks.
3. Fetch Twitter bookmarks
Before we call the bookmark API endpoint and fetch your bookmarks, we'll need your numerical user id. First pull the access token out of the token dict, then fetch your user id with the users/me endpoint:

access_token = token["access_token"]

user_me = requests.get(
    "https://api.twitter.com/2/users/me",
    headers={"Authorization": f"Bearer {access_token}"},
).json()

user_id = user_me["data"]["id"]
Now we're ready to call the bookmark API endpoint and fetch your bookmarks:
url = f"https://api.twitter.com/2/users/{user_id}/bookmarks"
headers = {
    "Authorization": f"Bearer {access_token}",
}

response = requests.get(url, headers=headers, params={
    'tweet.fields': 'author_id,created_at',
    'expansions': 'author_id',
    'user.fields': 'username',
})
tweets = response.json()['data']
This API call will fetch up to 100 bookmarks. To paginate to the next 100, you'll need to include the pagination_token that we received in the response. This guide only covers storing the 100 latest ones.
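If you do want more than the latest 100, the loop is straightforward: keep requesting until the response no longer contains meta.next_token. Here's a sketch of the pagination logic; the get_page callable is a stand-in for the requests.get call above, with the token passed as the pagination_token parameter:

```python
def fetch_all_bookmarks(get_page):
    """Collect every page of bookmarks; get_page(token) returns one parsed page."""
    tweets, token = [], None
    while True:
        page = get_page(token)
        tweets.extend(page.get("data", []))
        token = page.get("meta", {}).get("next_token")
        if token is None:
            return tweets

# Quick check with two fake pages:
pages = {
    None: {"data": [{"id": "1"}], "meta": {"next_token": "abc"}},
    "abc": {"data": [{"id": "2"}], "meta": {}},
}
print(len(fetch_all_bookmarks(lambda t: pages[t])))  # 2
```

In the real script, get_page would wrap the requests.get call above, adding `'pagination_token': token` to the params whenever the token is not None.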
The last part of this section is to include the username and name of each Twitter user with each tweet. The Twitter API response returns the tweets and the user data in two different keys:
Create a mapping dict for user data:
user_mapping = {user['id']:user for user in response.json()['includes']['users']}
Then combine bookmarks with user data:
for tweet in tweets:
    tweet.update({
        'name': user_mapping[tweet['author_id']]['name'],
        'username': user_mapping[tweet['author_id']]['username']
    })
So far, we have generated an access token and a refresh token, and fetched your 100 latest bookmarks combined with user data.
The following section is optional; we'll use GPT-3 to get 5 keywords for each bookmark.
4. Get bookmark keywords with GPT-3 API (optional)
This section is optional and requires access to the OpenAI GPT-3 API. We'll let GPT-3 give us 5 keywords for each Twitter bookmark. That way, it'll be a breeze to find a specific tweet you bookmarked, and you'll be able to organize the tweets by keyword.
First, get your API key: go to https://beta.openai.com/, log in, click on your avatar and choose View API keys:
Then create a new secret key and save it for the request:
For the endpoint text-davinci-002, we'll need a prompt that asks the endpoint to give us 5 keywords related to the tweet. Feel free to play around with this prompt. Here's the one I'm using:
prompt = f"Here is a tweet, give me 5 keywords, each keyword on a new line, that describe what the tweet is about \n\n --- tweet start ---- \n\n {tweet['text']} \n\n --- tweet end ---:"
Now we can make a call to the Davinci endpoint for each tweet in our list and expand each dict with a new key, keywords:
import openai

openai.api_key = "YOUR_OPEN_AI_GPT3_API_KEY"

for i, tweet in enumerate(tweets, start=1):
    print(f"Processing {i}/{len(tweets)} tweets", end='\r')

    # Create a prompt for the completion endpoint
    prompt = f"Here is a tweet, give me 5 keywords, each keyword on a new line, that describe what the tweet is about \n\n --- tweet start ---- \n\n {tweet['text']} \n\n --- tweet end ---:"

    response_gpt3 = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=0.7,
        max_tokens=100,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    # Update the tweet with the generated keywords
    tweet.update({'keywords': response_gpt3['choices'][0]['text'].strip()})
This takes a little while, so I've added a small progress counter, print(f"Processing {i}/{len(tweets)} tweets", end='\r'), just to make sure it's making progress.
Great, now each of your bookmarks should be expanded with 5 keywords, all decided by GPT-3:
Each keyword has a trailing \n to add a new line in your Notion database.
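If you'd rather work with the keywords as a list (for example to store them as separate tags instead of one text block), splitting on those newlines is all it takes. A small sketch with made-up GPT-3 output:

```python
keywords_text = "python\ntwitter\nnotion\napi\nbookmarks"  # example GPT-3 output
keyword_list = [kw.strip() for kw in keywords_text.split("\n") if kw.strip()]
print(keyword_list)
# ['python', 'twitter', 'notion', 'api', 'bookmarks']
```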
The next step is to get Notion API keys to be able to add your bookmarks to a Notion table.
5. Get Notion API keys
Start creating an integration by logging into your Notion account, going to https://www.notion.so/my-integrations, and clicking on the + New integration button.
Give your integration a name, select the workspace where you want to install this integration and the capabilities that the integration will have, then click Submit to create the integration:
Copy the Internal Integration Token on the next page and save it somewhere. We'll use it together with the bookmarks from Twitter later:
The next step is to create a page with a database.
6. Create a Notion table and connect with API
Continue by creating a new page and a new table with the columns Username, Name, Keywords, Tweet, URL, and Tweeted at:
Integrations don't have access to any pages or databases in the workspace at first. We'll have to share each database with the newly created integration. Click Share and pick the integration you created in the previous step and click Invite:
Your integration now has the requested permissions on the new database; the last step is to get the id of the database. Ensure you're viewing the database as a full page if you use an inline database.
If you have a workspace, the database id is the part of the URL after your workspace name and the slash (myworkspace/) and before the question mark (?).
The id is 32 characters long, containing numbers and letters. If you're using the Notion desktop app, click the Share button and select Copy link.
Copy and save the id for your database for the next step:
https://www.notion.so/y9btc56182y429inm1180e9e42w2r177?v=...
|-------- Your database id ------|
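Rather than copying the id by hand, you can also pull it out of the URL in code. A small sketch, assuming the id is the last path segment before the question mark (Notion ids contain no hyphens in URL form, so stripping hyphens handles links like Some-Title-&lt;id&gt; too):

```python
def extract_database_id(notion_url):
    """Pull the 32-character database id out of a Notion URL."""
    last_segment = notion_url.split("?")[0].rstrip("/").split("/")[-1]
    # Page links can look like 'Some-Title-<id>'; the id is the final 32 chars
    return last_segment.replace("-", "")[-32:]

print(extract_database_id(
    "https://www.notion.so/y9btc56182y429inm1180e9e42w2r177?v=abc"
))
# y9btc56182y429inm1180e9e42w2r177
```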
7. Add bookmarks to Notion
Now that we have the API keys for your Notion integration, let's create a payload for each bookmarked tweet and add them to your database:
# Your Notion integration token
notion_key = "YOUR_NOTION_INTEGRATION_KEY"

# The id of the database you created
notion_database_id = "YOUR_NOTION_DATABASE_ID"

# Set the headers
headers = {
    'Authorization': 'Bearer ' + notion_key,
    'Content-Type': 'application/json',
    'Notion-Version': '2021-08-16'
}

# Create the payload and make the request for each tweet
for i, tweet in enumerate(tweets, start=1):
    payload = {
        'parent': {'database_id': notion_database_id},
        'properties': {
            'title': {
                'title': [
                    {
                        'text': {
                            'content': tweet['username']
                        }
                    }
                ]
            },
            "Keywords": {"rich_text": [{"type": "text", "text": {"content": tweet['keywords']}}]},
            "Name": {"rich_text": [{"type": "text", "text": {"content": tweet['name']}}]},
            "Tweet": {"rich_text": [{"type": "text", "text": {"content": tweet['text']}}]},
            "URL": {'url': f"https://twitter.com/twitter/status/{tweet['id']}"},
            "Tweeted at": {"date": {"start": tweet['created_at']}}
        }
    }

    # Make the request
    r = requests.post('https://api.notion.com/v1/pages', headers=headers, data=json.dumps(payload))

    # Print the response
    print(f"Tweet {i}/{len(tweets)} Response: {r.json()['object']}", end='\r')
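One thing to keep in mind: the Notion API rate-limits requests (an average of roughly three per second), so for larger batches a small pause between calls keeps you on the safe side. A minimal sketch of a throttled loop (the delay value is just an assumption tuned to that limit):

```python
import time

def throttled(items, delay=0.34):
    """Yield items with a pause between them to stay under ~3 requests/second."""
    for item in items:
        yield item
        time.sleep(delay)

# In the loop above you'd write: for i, tweet in enumerate(throttled(tweets), start=1): ...
print(sum(1 for _ in throttled(range(3), delay=0.01)))  # 3
```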
You should now have 100 of your bookmarks in your Notion database:
We're all set! You can now filter and sort your bookmarked tweets in your Notion database.
In the last section below, we'll see how you can fetch a new access token after it has expired.
8. Refresh your auth token
As mentioned earlier, the access token is only valid for 2 hours. So if you're creating a service that is automatically and regularly fetching your new Twitter bookmarks, you'll need to request a new access token every time.
You can use your refresh token, which is valid for 6 months, to fetch a new access token like this:
# The token dict you saved earlier, containing the refresh token
token = {"refresh_token": "YOUR_PREVIOUSLY_SAVED_REFRESH_TOKEN"}

refreshed_token = oauth.refresh_token(
    client_id=client_id,
    client_secret=client_secret,
    token_url=token_url,
    auth=auth,
    refresh_token=token["refresh_token"],
)
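Since the refresh token has to survive between runs of your script, you'll want to persist the token dict somewhere. A minimal sketch that stores it as JSON on disk (the file name is a made-up example; an environment variable or a secrets manager works just as well):

```python
import json
from pathlib import Path

TOKEN_FILE = Path("twitter_token.json")  # hypothetical location

def save_token(token):
    """Persist the full token dict, including the refresh token."""
    TOKEN_FILE.write_text(json.dumps(token))

def load_token():
    return json.loads(TOKEN_FILE.read_text())

save_token({"access_token": "abc", "refresh_token": "def"})
print(load_token()["refresh_token"])  # def
```

After each refresh, save the new token dict again, since Twitter may rotate the refresh token along with the access token.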
Summary
1. Get Twitter developer API credentials
2. Get OAuth 2.0 Twitter API token
3. Fetch Twitter bookmarks
4. Get bookmark keywords with GPT-3 API (optional)
5. Get Notion API keys
6. Create a Notion table and connect with API
7. Add bookmarks to Notion
8. Refresh your auth token
Next steps
1. Repo with source code
Here is the repo with a Jupyter notebook with all the source code if you'd like to implement this on your own ⬇️ https://github.com/norahsakal/automatically-organize-twitter-bookmarks-in-notion
2. Don't feel like coding?
Join the waitlist to get your Twitter bookmarks automatically added to your Notion ⬇️ https://bookmark-organizer.carrd.co