Predicting Real Estate Prices: A Journey into Data Science and Market Trends

Remi Tang
13 min read · Jul 30, 2024


In 2019, I embarked on an exciting project aimed at predicting real estate prices and analyzing their trends over time. This venture into the world of data science provided valuable insights and practical skills that I am eager to share with you. Through this article, I will walk you through the steps I took, the methodologies I employed, and the intriguing results I discovered.

Photo by Tierra Mallorca on Unsplash

My name is Remi TANG, and I am passionate about data analysis and solving complex problems. In 2019, I completed a rigorous three-month data science bootcamp at Jedha Bootcamp, from March to May. Around that time, France had just released public data on property transactions from 2014 to 2018 on the data.gouv.fr website. Motivated by this newly available data, I embarked on an intensive two-week project focused on estimating property values in Ile-de-France and analyzing the evolution of real estate prices over time.

Here is the link to the GitHub repository containing the complete project, along with a demonstration of the project on my website.

Feel free to connect with me on LinkedIn or GitHub to discuss data science, real estate, or any other topics of interest!

For this project, I followed the standard data science workflow: defining clear goals, collecting the data, preprocessing and preparing it, analyzing it in depth, and applying machine learning techniques to predict outcomes.

Table of Contents

  1. Data collection
  2. Data pre-processing
    — Delete data without estate prices
    — Merge columns
    — Remove irrelevant features
    — Format column type
  3. Data filling and cleaning
    — Postal Code and City
    — Actual Built Surface and Number of Main Rooms
    — Nature of Land and Land Surface Area
  4. Data analysis
    — Remove extreme transaction values
    — Apply statistical rules to remove outliers
  5. Machine Learning
    Model 1: Predicting Real Estate Prices
    Model 2: Predicting Price Trends with LSTM
    — — Step 1: Collected Property Price Data
    — — Step 2: LSTM Implementation and Model Evaluation
    — — Step 3: JSON File Creation
  6. Implementation on my Website

Project Purpose

To keep the scope manageable, I restricted my analysis to property data within the Ile-de-France region, given the volume and complexity of the full national dataset. With this focused dataset, my primary goals were twofold:

  1. Estimate Property Prices:
  • Identify key factors influencing property prices in the Ile-de-France region.
  • Utilize location and size as primary variables to predict apartment prices.

  2. Predict Real Estate Price Evolution:
  • Analyze historical trends in real estate prices across cities and departments.
  • Develop predictive models to forecast property price changes between 2018–2019 and 2019–2020.

Data collection

I sourced the data from data.gouv.fr, which contains all property transactions in France from 2014 to 2018. The dataset consisted of five different files, one for each year. After merging these files, I had a combined dataset of 13,903,117 rows and 43 features.

import os
os.chdir("/path/to/your/folder")

import pandas as pd
years = ["2014", "2015", "2016", "2017", "2018"]

# All the estate transaction data are in a folder containing 2014.txt, 2015.txt, 2016.txt, 2017.txt, 2018.txt
data = pd.read_csv(years[0] + ".txt", on_bad_lines='skip', delimiter="|", low_memory=False)
for year in years[1:]:
    add_data = pd.read_csv(year + ".txt", on_bad_lines='skip', delimiter="|", low_memory=False)
    data = pd.concat([data, add_data], ignore_index=True)

print(data.shape)
# (13903117, 43)

To focus my analysis, I filtered the dataset to include only transactions in Ile-de-France, reducing the dataset to 1,914,290 rows. For ease of understanding and analysis, I translated the column names from French to English.

column_translation = {
'Code service CH': 'Service Code CH',
'Reference document': 'Document Reference',
'1 Articles CGI': 'Article 1 CGI',
'2 Articles CGI': 'Article 2 CGI',
'3 Articles CGI': 'Article 3 CGI',
'4 Articles CGI': 'Article 4 CGI',
'5 Articles CGI': 'Article 5 CGI',
'No disposition': 'Disposition Number',
'Date mutation': 'Transaction Date',
'Nature mutation': 'Nature of Transaction',
'Valeur fonciere': 'Property Value',
'No voie': 'Street Number',
'B/T/Q': 'B/T/Q',
'Type de voie': 'Street Type',
'Code voie': 'Street Code',
'Voie': 'Street Name',
'Code postal': 'Postal Code',
'Commune': 'City',
'Code departement': 'Department Code',
'Code commune': 'City Code',
'Prefixe de section': 'Section Prefix',
'Section': 'Section',
'No plan': 'Plan Number',
'No Volume': 'Volume Number',
'1er lot': '1st Lot',
'Surface Carrez du 1er lot': 'Carrez Law Surface Area of 1st Lot',
'2eme lot': '2nd Lot',
'Surface Carrez du 2eme lot': 'Carrez Law Surface Area of 2nd Lot',
'3eme lot': '3rd Lot',
'Surface Carrez du 3eme lot': 'Carrez Law Surface Area of 3rd Lot',
'4eme lot': '4th Lot',
'Surface Carrez du 4eme lot': 'Carrez Law Surface Area of 4th Lot',
'5eme lot': '5th Lot',
'Surface Carrez du 5eme lot': 'Carrez Law Surface Area of 5th Lot',
'Nombre de lots': 'Number of Lots',
'Code type local': 'Local Type Code',
'Type local': 'Local Type',
'Identifiant local': 'Local Identifier',
'Surface reelle bati': 'Actual Built Surface',
'Nombre pieces principales': 'Number of Main Rooms',
'Nature culture': 'Nature of Land',
'Nature culture speciale': 'Special Nature of Land',
'Surface terrain': 'Land Surface Area'
}

# Rename columns
data.rename(columns=column_translation, inplace=True)

# Keep data only in Ile-De-France
IDF = data[data["Department Code"].isin(['75', '77', '78', '91', '92', '93', '94', '95'])]

print(IDF.shape)
# (1914290, 43)

Data pre-processing

After an initial check of the dataset, I noticed that many entries were missing the price of the estate. Additionally, there were several columns that could be merged, such as the number of the street and the name of the street, while others could be deleted altogether. Here’s a summary of the steps I took to prepare the data:

  1. Delete data without estate prices: I removed all entries where the property price was missing.
  2. Merge columns: I combined the columns for Street Number, B/T/Q, Street Type, and Street Name into a single “Address” column to simplify the dataset.
  3. Remove irrelevant features: I dropped administrative columns and those related to outbuildings and industrial or commercial premises, and kept only apartment and house transactions.
  4. Format column types: I converted the property price to a float and the transaction date to a datetime, so that numerical operations and time series analysis could be performed.

IDF["Property Value"].isna().value_counts()
# False: 1902234
# True: 12056 (missing values)

# 1. Delete data without Property Value
IDF = IDF[IDF["Property Value"].notna()]

# 2. Merge columns to get one column "Address"
address = IDF.loc[:, ['Street Number', 'Street Type', 'Street Name']]
merged = []

def isNotNan(x):
    # NaN is the only value that is not equal to itself
    return x == x

for i in range(len(address)):
    s_num, s_type, s_name = "", "", ""
    if isNotNan(address.iloc[i, 0]):
        s_num = str(int(address.iloc[i, 0])) + " "

    if isNotNan(address.iloc[i, 1]):
        s_type = str(address.iloc[i, 1]) + " "

    if isNotNan(address.iloc[i, 2]):
        s_name = str(address.iloc[i, 2]) + " "

    merged.append(s_num + s_type + s_name)

IDF["Address"] = merged

# 3. Drop some irrelevant features and keep only apartment and house
IDF = IDF.drop(['Service Code CH', 'Document Reference',
'Article 1 CGI', 'Article 2 CGI', 'Article 3 CGI', 'Article 4 CGI', 'Article 5 CGI',
'Street Number', 'B/T/Q', 'Street Type', 'Street Name'], axis=1)

IDF = IDF[IDF["Local Type"].isin(['Appartement', 'Maison'])]

# 4. Transform estate price to float and transaction date to datetime
IDF["Property Value"] = IDF["Property Value"].replace(to_replace=r',', value='.', regex=True)
IDF["Property Value"] = IDF["Property Value"].astype(float)
IDF["Transaction Date"] = pd.to_datetime(IDF["Transaction Date"], format='%d/%m/%Y')

After these steps, I was left with 930,989 rows of transactions and 17 features. The final features in the dataset are:

  • Transaction Date: The date of the transaction.
  • Nature of Transaction: The nature of the transaction (e.g., sale, donation).
  • Property Value: The property value.
  • Address: The full address (merged from Street Number, B/T/Q, Street Type, and Street Name).
  • Street Code: The street code.
  • Postal Code: The postal code.
  • City: The city or town.
  • Department Code: The department code.
  • City Code: The city code.
  • Section: The cadastral section.
  • Plan Number: The cadastral plan number.
  • Local Type Code: The local type code.
  • Local Type: The type of premises (e.g., house, apartment).
  • Actual Built Surface: The actual built surface area.
  • Number of Main Rooms: The number of main rooms.
  • Nature of Land: The nature of land.
  • Land Surface Area: The surface area of the land.

Data filling and cleaning

After reducing the number of columns, I focused on addressing missing values in the dataset. A quick check revealed missing data in the following important columns: Postal Code, City, Actual Built Surface, Number of Main Rooms, Nature of Land, and Land Surface Area.

for i in range(len(IDF.columns)):
    print(IDF.columns[i] + " : " + str(IDF[IDF.iloc[:, i].isna()].shape[0]))

"""
Transaction Date : 0
Nature of Transaction : 0
Property Value : 0
Address : 0
Street Code : 0
Postal Code : 604
City : 593
Department Code : 0
City Code : 0
Section : 0
Plan Number : 0
Local Type Code : 0
Local Type : 0
Actual Built Surface : 31
Number of Main Rooms : 31
Nature of Land : 582032
Land Surface Area : 582032
"""

To address these missing values, I followed a three-step process:

  1. Postal Code and City: These can be deduced from other columns such as the department code and city code. By cross-referencing available data, I was able to fill in these missing values accurately.
  2. Actual Built Surface and Number of Main Rooms: Since these features are crucial for our project, any rows with missing values in these columns were removed.
  3. Nature of Land and Land Surface Area: For the ‘Nature of Land’ column, I filled in the missing values with “not specified”. For the ‘Land Surface Area’ column, I set the missing values to 0, as these were not the most critical features for the analysis.

This thorough treatment of missing values ensured that the dataset was as complete and accurate as possible, allowing for more reliable analysis and modeling.

# 1. Postal Code and City (Paris arrondissements can be deduced from the city code)
IDF.loc[IDF["City Code"] == 118, ['City', "Postal Code"]] = "PARIS 18", 75018
IDF.loc[IDF["City Code"] == 117, ['City', "Postal Code"]] = "PARIS 17", 75017
IDF.loc[IDF["City Code"] == 116, ['City', "Postal Code"]] = "PARIS 16", 75016
IDF.loc[IDF["City Code"] == 115, ['City', "Postal Code"]] = "PARIS 15", 75015
IDF.loc[IDF["City Code"] == 114, ['City', "Postal Code"]] = "PARIS 14", 75014
IDF.loc[IDF["City Code"] == 107, ['City', "Postal Code"]] = "PARIS 07", 75007
IDF.loc[IDF["City Code"] == 106, ['City', "Postal Code"]] = "PARIS 06", 75006

from time import sleep
import numpy as np
import requests
import json
import string

# Strip accents so the API names match the upper-case, unaccented names used in the dataset
table = str.maketrans('àâäéèëêùûüîïôöÿñÉ', 'aaaeeeeuuuiiooynE')

for i in IDF[IDF["Postal Code"].isna()].index:
    code = str(IDF.loc[i, "Department Code"]) + "{:03d}".format(int(IDF.loc[i, "City Code"]))
    url = "https://geo.api.gouv.fr/communes?code={}"
    r = requests.get(url.format(code))

    # Small random pause to avoid hammering the API
    sleep(np.random.random_sample(1)[0] / 20)
    name = r.json()[0]["nom"]
    postal_code = r.json()[0]["codesPostaux"][0]
    name = name.translate(table).upper()
    if name == "BOIS-D'ARCY":
        name = "BOIS D ARCY"
    IDF.loc[i, "Postal Code"] = postal_code
    IDF.loc[i, "City"] = name

# 2. Actual Built Surface and Number of Main Rooms
IDF = IDF[IDF["Number of Main Rooms"].isna() == False]
IDF = IDF[IDF["Actual Built Surface"].isna() == False]

# 3. Nature of Land and Land Surface Area
IDF["Nature of Land"] = IDF["Nature of Land"].fillna("Not specified")
IDF["Land Surface Area"] = IDF["Land Surface Area"].fillna(0)

After completing these tasks, there were no more missing elements in the dataset.
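
A quick check, not part of the original notebook, confirms it:

# Total count of missing values across all columns; should print 0
print(IDF.isna().sum().sum())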

Data analysis

Now that there are no more missing values, I analyzed the data to understand its characteristics. Upon a closer look, I noticed the presence of anomalous values. To address this, I took the following steps:

  1. Remove extreme transaction values: I deleted transactions with values below 1,000 euros and those exceeding 100,000,000 euros.
  2. Apply statistical rules to remove outliers: I calculated a lower bound (min_vf) as Q1 − 1.5 × IQR and an upper bound (max_vf) as Q3 + 1.5 × IQR, and removed values falling outside this range to ensure a more accurate and reliable dataset.

# Analyse data
import seaborn as sns
sns.relplot(x="Property Value", y="Actual Built Surface", data=IDF)

# 1. Remove extreme transaction values
IDF = IDF[IDF["Property Value"] > 1000]
IDF = IDF[IDF["Property Value"] < 100000000]

# 2. Apply statistical rules to remove outliers (IQR rule)
def outlier(IDF, feature):
    description = IDF.describe()
    q1_vf = description.loc["25%", feature]
    q3_vf = description.loc["75%", feature]
    e_q_vf = q3_vf - q1_vf  # interquartile range
    min_vf = q1_vf - 1.5 * e_q_vf
    max_vf = q3_vf + 1.5 * e_q_vf
    return min_vf, max_vf

IDF.shape #(914099, 17)

# Export Dataset
IDF.to_csv(path_or_buf="IDF.csv", index=False)

Finally, I was left with 914,099 rows of transactions and 17 features.

Machine Learning

In the next phase of my project, I focused on building machine learning models to achieve two primary goals:

  1. Predict real estate prices based on location and property characteristics.
  2. Predict the trend (increase or decrease) in property prices over time.

Model 1: Predicting Real Estate Prices

I developed a model to predict property values based on various features such as Transaction Date, City, Actual Built Surface, Number of Main Rooms, Street Code, and Section. To achieve this, I followed the basic train-test split method and experimented with Linear Regression and Random Forest models, using GridSearchCV to tune the Random Forest hyperparameters.

I won’t delve deeply into the specifics of this process, but here is the code that outlines what I implemented:

# Import Dataset
IDF = pd.read_csv("IDF.csv")

# Only get the interesting features
IDF_reg = IDF[['Property Value',
'Actual Built Surface',
'City',
'Number of Main Rooms'
]]
IDF_reg = pd.get_dummies(IDF_reg)
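# Note: one-hot encoding City adds one column per distinct commune or
# arrondissement in the dataset (on the order of a thousand for Ile-de-France).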

X = IDF_reg.iloc[:, 1:]
y = IDF_reg.iloc[:, 0]

# Train test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Linear Regression Model
from sklearn import linear_model

lm = linear_model.LinearRegression()
model = lm.fit(X_train,y_train)

lm.score(X_train,y_train)
lm.score(X_test,y_test) # R²

# GridSearchCV (hyperparameter search for the Random Forest)
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor

parameters_rdf = {
    'max_depth': np.arange(1, 10),
    'n_estimators': np.arange(1, 100)
}

gdcv_rdf = GridSearchCV(RandomForestRegressor(), parameters_rdf, verbose=2)
mod_gdcv = gdcv_rdf.fit(X_train, y_train)

# Best estimator found by the search (max_depth=9, n_estimators=69)
mod_rdf = mod_gdcv.best_estimator_
mod_rdf.fit(X_train, y_train)

mod_rdf.score(X_train, y_train)
mod_rdf.score(X_test, y_test)

# RandomForest
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 10, max_depth=3, random_state = 0)
rf.fit(X_train, y_train)

rf.score(X_train, y_train)
rf.score(X_test, y_test)
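
The snippets above only report R² via .score(). As a small hedged addition (not in the original code), absolute error metrics can also be computed on the held-out set, for example for the simple Random Forest:

from sklearn.metrics import mean_absolute_error, mean_squared_error

# Illustrative evaluation of the Random Forest defined above on the test set
y_pred = rf.predict(X_test)
print("MAE :", mean_absolute_error(y_test, y_pred))
print("RMSE:", mean_squared_error(y_test, y_pred) ** 0.5)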

Model 2: Predicting Price Trends with LSTM

For the second model, I used a Long Short-Term Memory (LSTM) neural network to predict the trend of real estate prices. The goal was to forecast the price per square meter and its evolution over time for each department and city. To achieve this, I followed several steps before reaching my conclusion.

  1. Collected Property Price Data
  • Collected property price data for each department and city from 2014 to 2018.
  • Calculated the price per square meter for each year.
  • Determined the year-on-year price evolution for each period.

  2. LSTM Implementation and Model Evaluation
  • Implemented the LSTM model to predict the price per square meter for 2019 and 2020.
  • Additionally, forecasted the price evolution for the periods 2018–2019 and 2019–2020.
  • Evaluated the model using metrics such as mean absolute error (MAE) and root mean squared error (RMSE).
  • Considered the number of data points to ensure robust predictions.

  3. JSON File Creation
  • Compiled the results into a JSON file, summarizing the price per square meter and its evolution for each department and city.
  • This JSON file then served as the input for the map visualization on my website.

By carefully processing the data and using the LSTM model, I aimed to provide accurate predictions of real estate price trends, helping to better understand and forecast market dynamics.

Step 1: Collected Property Price Data

  1. Creating a Blank Table: I started by creating a blank table to store all the necessary data.
  2. Adding price_m2 Column: I added a new column called price_m2, calculated by dividing the total property price by the built surface area to obtain the price per square meter.
  3. Deleting Outlier Values: To ensure data accuracy, I removed outlier values, filtering out property prices that were unrealistically low or high.
  4. Calculating Mean Price per Square Meter for Each Year: For each department, I calculated the mean price per square meter for each year and added this information to a year table.
  5. Calculating Year-on-Year Price Evolution: I calculated the year-on-year price evolution for each department and included this data in the year table.
# Import Dataset
IDF = pd.read_csv("IDF.csv")

# Create blank table
d2014 = []
d2015 = []
d2016 = []
d2017 = []
d2018 = []
d2019 = []
d2020 = []
d2014_2015 = []
d2015_2016 = []
d2016_2017 = []
d2017_2018 = []
d2018_2019 = []
d2019_2020 = []
error = []
length = []

department_code = IDF["Department Code"].value_counts().index
departments = [IDF[IDF["Department Code"] == department_code[i]] for i in range(len(department_code))]
data_preds = []

# Remove outliers and add the price per square meter, keeping the filtered data for the next step
filtered_departments = []
for department in departments:
    department = department.copy()
    department["price_m2"] = department["Property Value"] / department["Actual Built Surface"]
    min_m2, max_m2 = outlier(department, "price_m2")
    min_vf, max_vf = outlier(department, "Property Value")
    department = department[department["price_m2"] > min_m2]
    department = department[department["price_m2"] < max_m2]
    department = department[department["Property Value"] > min_vf]
    department = department[department["Property Value"] < max_vf]
    filtered_departments.append(department)
    length.append(department.shape[0])

# Calculating mean price per square meter for each year and year-on-year price evolution
for department in filtered_departments:
    time_data = department.loc[:, ["Transaction Date", "price_m2"]].copy()
    time_data["Transaction Date"] = pd.to_datetime(time_data["Transaction Date"])
    time_data = time_data.sort_values("Transaction Date").set_index("Transaction Date")

    # Yearly mean price per square meter (2014 to 2018), used for the summary table
    yearly = time_data["price_m2"].resample("YS").mean()
    # Monthly mean price per square meter, used as the input series for the LSTM below
    data_pred = pd.DataFrame(time_data["price_m2"].resample("MS").mean())
    data_preds.append(data_pred)

    d2014.append(round(yearly.iloc[0], 2))
    d2015.append(round(yearly.iloc[1], 2))
    d2016.append(round(yearly.iloc[2], 2))
    d2017.append(round(yearly.iloc[3], 2))
    d2018.append(round(yearly.iloc[4], 2))

    d2014_2015.append(round((yearly.iloc[1] - yearly.iloc[0]) * 100 / yearly.iloc[0], 3))
    d2015_2016.append(round((yearly.iloc[2] - yearly.iloc[1]) * 100 / yearly.iloc[1], 3))
    d2016_2017.append(round((yearly.iloc[3] - yearly.iloc[2]) * 100 / yearly.iloc[2], 3))
    d2017_2018.append(round((yearly.iloc[4] - yearly.iloc[3]) * 100 / yearly.iloc[3], 3))

Step 2: LSTM Implementation and Model Evaluation

For simplicity, I focused on a single department, although the process can be looped to apply it across all departments.

  1. Normalization: I normalized the dataset to ensure the data was scaled appropriately for the model.
  2. LSTM Model Implementation: I implemented the LSTM model using Keras and trained it on the normalized data.
  3. Model Evaluation: I evaluated the model’s performance using relevant metrics.
  4. Appending Predicted Values: I appended the predicted values to my table, adding the forecasted price per square meter and its evolution for each year.
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# Price-per-m2 series for the first department
data_pred = data_preds[0]

# Normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
scaler_fit = scaler.fit(data_pred)
dataset = scaler_fit.transform(data_pred)
dataset = pd.DataFrame(dataset, columns=["X"])

# Build sliding windows: each row holds windows_size consecutive observations
# as features and the following one as the target
windows_size = 6
dataset_new = dataset
for i in range(windows_size):
    dataset = pd.concat([dataset, dataset_new.shift(-(i + 1))], axis=1)

dataset.dropna(axis=0, inplace=True)
X = dataset.iloc[:, :-1]
Y = dataset.iloc[:, -1]

# LSTM model with Keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import Activation

model = Sequential()
model.add(LSTM(windows_size, input_shape=(windows_size, 1), return_sequences=True))
model.add(Dropout(0.5))
model.add(LSTM(256))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation("linear"))
model.compile(loss='mse', optimizer='adam', metrics=['mean_squared_error'])

# Train on the full windowed series (no separate test split is used here)
model.fit(X.values.reshape(X.shape[0], X.shape[1], 1), Y, epochs=50, batch_size=1, verbose=2)

# Evaluate model (loss and MSE on the training windows)
score = model.evaluate(X.values.reshape(X.shape[0], X.shape[1], 1), Y)
error.append(score)

# Add new prediction in table
new_predict = []
new_data = []
# Start from the last observed window (windows_size inputs plus the last target)
for i in range(0, windows_size):
    new_data.append(X.iloc[-1, i])
new_data.append(Y.iloc[-1])

# Roll the window forward 24 times to forecast 2019 and 2020 (12 steps per year)
for j in range(24):
    del new_data[0]
    if len(new_predict) > 0:
        new_data.append(new_predict[-1])
    test2018 = pd.DataFrame([new_data])
    predicted2018 = model.predict(test2018.values.reshape(test2018.shape[0], test2018.shape[1], 1))
    new_predict.append(predicted2018[0][0])

# Back to euros per square meter
new_predict = scaler.inverse_transform(np.array(new_predict).reshape(-1, 1)).flatten()
new_predict = pd.Series(new_predict)

# Mean of the forecasted values for 2019 and for 2020
d2019.append(round(new_predict[:12].mean(), 2))
d2020.append(round(new_predict[12:].mean(), 2))

d2018_2019.append(round((d2019[0] - d2018[0]) * 100 / d2018[0], 3))
d2019_2020.append(round((d2020[0] - d2019[0]) * 100 / d2019[0], 3))
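
The article mentions MAE and RMSE as evaluation metrics; the snippet above only tracks the Keras loss. A minimal sketch of how they could be computed in euros per square meter, assuming the X, Y, model, and scaler variables from the snippet above, is:

from sklearn.metrics import mean_absolute_error, mean_squared_error

# Fitted values on the training windows, mapped back to the original scale
fitted = model.predict(X.values.reshape(X.shape[0], X.shape[1], 1))
fitted = scaler.inverse_transform(fitted).flatten()
actual = scaler.inverse_transform(Y.values.reshape(-1, 1)).flatten()

print("MAE :", mean_absolute_error(actual, fitted))
print("RMSE:", mean_squared_error(actual, fitted) ** 0.5)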

Step 3: JSON File Creation

In this step, I compiled all the data into a JSON file. For each department and city, I added the mean price per square meter from 2014 to 2018, as well as the predicted prices for 2019 and 2020. Additionally, I included the year-on-year price evolution from 2014 to 2018 and the predicted evolution for 2018–2019 and 2019–2020, along with the error and the number of data points for each city and department.

By looping through all departments and cities, I created a comprehensive JSON file that encapsulates all the historical and predicted data.

os.chdir("/path/to/your/folder/donneesgeo")

last_dep_json = pd.read_json("departments.json")

dep_json = pd.DataFrame()

dep_json["Department Code"] = department["Department Code"]
dep_json["2014"] = d2014
dep_json["2015"] = d2015
dep_json["2016"] = d2016
dep_json["2017"] = d2017
dep_json["2018"] = d2018
dep_json["2019"] = d2019
dep_json["2020"] = d2020
dep_json["2014 - 2015"] = d2014_2015
dep_json["2015 - 2016"] = d2015_2016
dep_json["2016 - 2017"] = d2016_2017
dep_json["2017 - 2018"] = d2017_2018
dep_json["2018 - 2019"] = d2018_2019
dep_json["2019 - 2020"] = d2019_2020
dep_json["error"] = error
dep_json["length"] = length

new_dep_json = pd.concat([last_dep_json, dep_json], ignore_index=True)

new_dep_json.to_json("departments.json")

Implementation on my Website

With the JSON file ready, I developed a website using OpenStreetMap to visualize the data. This website displays a map where users can see the price per square meter for each department or city and the year-by-year evolution.
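
The site itself is a custom front end built on OpenStreetMap tiles. As a rough Python sketch of the same idea (not the site's actual code, and assuming a hypothetical departements-idf.geojson boundary file whose code property matches the department codes), a similar choropleth could be drawn with folium:

import folium
import pandas as pd

dep_json = pd.read_json("departments.json")

# OpenStreetMap tiles centred on Ile-de-France
m = folium.Map(location=[48.85, 2.35], zoom_start=9)

folium.Choropleth(
    geo_data="departements-idf.geojson",  # hypothetical GeoJSON of department boundaries
    data=dep_json,
    columns=["Department Code", "2018"],  # colour each department by its 2018 price per m2
    key_on="feature.properties.code",
    fill_color="YlOrRd",
    legend_name="Mean price per m2 in 2018 (EUR)",
).add_to(m)

m.save("idf_prices.html")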

You can explore the website here, view the open-source code on my GitHub repository, and check out the slide presentation here.

Thank you for reading until the end! 😊 I am Remi, a tech enthusiast passionate about solving complex problems!
