|

How to Build an End-to-End Interactive Analytics Dashboard Using PyGWalker Features for Insightful Data Exploration

🚀

In this tutorial, we explore the advanced capabilities of PyGWalker, a powerful tool for visual data analysis that integrates seamlessly with pandas. We begin by generating a realistic e-commerce dataset enriched with time, demographic, and marketing features to mimic real-world business data. We then prepare several analytical views, including daily sales, category performance, and customer segment summaries. Finally, we use PyGWalker to interactively explore patterns, correlations, and trends across these dimensions through intuitive drag-and-drop visualizations. Check out the FULL CODES here.

!pip install pygwalker pandas numpy scikit-learn


import pandas as pd
import numpy as np
import pygwalker as pyg
from datetime import datetime, timedelta

We begin by setting up our environment, installing all necessary dependencies, and importing essential libraries, including pandas, numpy, and pygwalker. We make sure that everything is ready for building our interactive data exploration workflow in Colab. Check out the FULL CODES here.
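As an optional quick check before moving on (not part of the original notebook), you can print the installed library versions; this sketch assumes each package exposes the standard `__version__` attribute.

# Optional environment check (not in the original tutorial).
# Assumes each library exposes the standard __version__ attribute.
print("pandas:", pd.__version__)
print("numpy:", np.__version__)
print("pygwalker:", pyg.__version__)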

def generate_advanced_dataset():
    # Fix the random seed so the synthetic dataset is reproducible
    np.random.seed(42)
    start_date = datetime(2022, 1, 1)
    dates = [start_date + timedelta(days=x) for x in range(730)]
    categories = ['Electronics', 'Clothing', 'Home & Garden', 'Sports', 'Books']
    products = {
        'Electronics': ['Laptop', 'Smartphone', 'Headphones', 'Tablet', 'Smartwatch'],
        'Clothing': ['T-Shirt', 'Jeans', 'Dress', 'Jacket', 'Sneakers'],
        'Home & Garden': ['Furniture', 'Lamp', 'Rug', 'Plant', 'Cookware'],
        'Sports': ['Yoga Mat', 'Dumbbell', 'Running Shoes', 'Bicycle', 'Tennis Racket'],
        'Books': ['Fiction', 'Non-Fiction', 'Biography', 'Science', 'History']
    }
    n_transactions = 5000
    data = []
    for _ in range(n_transactions):
        date = np.random.choice(dates)
        category = np.random.choice(categories)
        product = np.random.choice(products[category])
        base_prices = {
            'Electronics': (200, 1500),
            'Clothing': (20, 150),
            'Home & Garden': (30, 500),
            'Sports': (25, 300),
            'Books': (10, 50)
        }
        price = np.random.uniform(*base_prices[category])
        quantity = np.random.choice([1, 1, 1, 2, 2, 3], p=[0.5, 0.2, 0.15, 0.1, 0.03, 0.02])
        customer_segment = np.random.choice(['Premium', 'Standard', 'Budget'], p=[0.2, 0.5, 0.3])
        age_group = np.random.choice(['18-25', '26-35', '36-45', '46-55', '56+'])
        region = np.random.choice(['North', 'South', 'East', 'West', 'Central'])
        # Apply a holiday-season uplift (Nov-Dec) and a smaller summer bump (Jun-Jul)
        month = date.month
        seasonal_factor = 1.0
        if month in [11, 12]:
            seasonal_factor = 1.5
        elif month in [6, 7]:
            seasonal_factor = 1.2
        revenue = price * quantity * seasonal_factor
        discount = np.random.choice([0, 5, 10, 15, 20, 25], p=[0.4, 0.2, 0.15, 0.15, 0.07, 0.03])
        marketing_channel = np.random.choice(['Organic', 'Social Media', 'Email', 'Paid Ads'])
        # Satisfaction skews higher for premium customers and larger discounts
        base_satisfaction = 4.0
        if customer_segment == 'Premium':
            base_satisfaction += 0.5
        if discount > 15:
            base_satisfaction += 0.3
        satisfaction = np.clip(base_satisfaction + np.random.normal(0, 0.5), 1, 5)
        data.append({
            'Date': date, 'Category': category, 'Product': product, 'Price': round(price, 2),
            'Quantity': quantity, 'Revenue': round(revenue, 2), 'Customer_Segment': customer_segment,
            'Age_Group': age_group, 'Region': region, 'Discount_%': discount,
            'Marketing_Channel': marketing_channel, 'Customer_Satisfaction': round(satisfaction, 2),
            'Month': date.strftime('%B'), 'Year': date.year, 'Quarter': f'Q{(date.month-1)//3 + 1}'
        })
    df = pd.DataFrame(data)
    df['Profit_Margin'] = round(df['Revenue'] * (1 - df['Discount_%']/100) * 0.3, 2)
    df['Days_Since_Start'] = (df['Date'] - df['Date'].min()).dt.days
    return df

We design a function to generate a comprehensive e-commerce dataset that mirrors real-world business scenarios. We include product categories, customer demographics, seasonal effects, and satisfaction levels, ensuring that our data is diverse and analytically rich. Check out the FULL CODES here.

print("Generating superior e-commerce dataset...")
df = generate_advanced_dataset()
print(f"nDataset Overview:")
print(f"Total Transactions: {len(df)}")
print(f"Date Range: {df['Date'].min()} to {df['Date'].max()}")
print(f"Total Revenue: ${df['Revenue'].sum():,.2f}")
print(f"nColumns: {record(df.columns)}")
print("nFirst few rows:")
print(df.head())

We execute the dataset generation function and display key insights, including the total number of transactions, the date range, total revenue, and sample records. We get a clear snapshot of the data's structure and confirm that it is suitable for detailed analysis. Check out the FULL CODES here.
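Because the generator applies a 1.5x uplift in November and December, a quick optional sanity check like the one below (not part of the original code) can confirm that the seasonality actually shows up in the generated revenue.

# Optional sanity check (not in the original tutorial): compare average revenue
# per transaction in the holiday months (Nov-Dec) against the rest of the year.
holiday_mask = df['Date'].dt.month.isin([11, 12])
holiday_avg = df.loc[holiday_mask, 'Revenue'].mean()
rest_avg = df.loc[~holiday_mask, 'Revenue'].mean()
print(f"Avg revenue per transaction, Nov-Dec: ${holiday_avg:.2f}")
print(f"Avg revenue per transaction, rest of year: ${rest_avg:.2f}")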

daily_sales = df.groupby('Date').agg({
   'Revenue': 'sum', 'Quantity': 'sum', 'Customer_Satisfaction': 'mean'
}).reset_index()


category_analysis = df.groupby('Category').agg({
   'Revenue': ['sum', 'mean'], 'Quantity': 'sum', 'Customer_Satisfaction': 'mean', 'Profit_Margin': 'sum'
}).reset_index()
category_analysis.columns = ['Category', 'Total_Revenue', 'Avg_Order_Value',
                            'Total_Quantity', 'Avg_Satisfaction', 'Total_Profit']


segment_analysis = df.groupby(['Customer_Segment', 'Region']).agg({
   'Revenue': 'sum', 'Customer_Satisfaction': 'mean'
}).reset_index()


print("n" + "="*50)
print("DATASET READY FOR PYGWALKER VISUALIZATION")
print("="*50)

We perform data aggregations to prepare several analytical views, including time-based trends, category-level summaries, and performance metrics for customer segments. We organize this information so it is easy to visualize in PyGWalker. Check out the FULL CODES here.
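As an optional extra step (not in the original code), the segment-level view can also be pivoted into a compact region-by-segment revenue table, which mirrors the regional heatmap suggested later in the PyGWalker interface.

# Optional: pivot the segment view into a Customer_Segment x Region revenue table
# (illustrative only; PyGWalker can build the same heatmap interactively).
segment_pivot = segment_analysis.pivot(index='Customer_Segment',
                                        columns='Region',
                                        values='Revenue').round(2)
print(segment_pivot)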

print("n🚀 Launching PyGWalker Interactive Interface...")
walker = pyg.stroll(
   df,
   spec="./pygwalker_config.json",
   use_kernel_calc=True,
   theme_key='g2'
)


print("n✅ PyGWalker is now working!")
print("💡 Try creating these visualizations:")
print("   - Revenue development over time (line chart)")
print("   - Category distribution (pie chart)")
print("   - Price vs Satisfaction scatter plot")
print("   - Regional gross sales heatmap")
print("   - Discount effectiveness evaluation")

We launch the PyGWalker interactive interface to visually explore our dataset. We create meaningful charts, uncover trends in sales, satisfaction, and pricing, and observe how interactive visualization enhances our analytical understanding.
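If you want to share the exploration outside the notebook, recent PyGWalker releases also provide a `to_html` helper. The snippet below is a minimal, version-dependent sketch rather than part of the original tutorial; check that `pyg.to_html` is available in your installed release before relying on it.

# Optional, version-dependent: export the PyGWalker UI as a standalone HTML file.
# Assumes your installed pygwalker release provides pyg.to_html.
html_str = pyg.to_html(df)
with open("pygwalker_dashboard.html", "w", encoding="utf-8") as f:
    f.write(html_str)
print("Saved interactive dashboard to pygwalker_dashboard.html")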

PyGWalker interface tabs: Data View, Visualization, and Chat with Data.

In conclusion, we developed a complete data visualization workflow using PyGWalker, encompassing dataset generation, feature engineering, multidimensional analysis, and interactive exploration. We experience how PyGWalker transforms raw tabular data into rich, exploratory dashboards without needing complex code or BI tools. Through this exercise, we strengthen our ability to derive insights quickly, experiment visually, and connect data storytelling directly to practical business understanding.


Check out the FULL CODES here. Feel free to check out our GitHub Page for Tutorials, Codes and Notebooks. Also, feel free to follow us on Twitter and don't forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter. Wait! Are you on Telegram? Now you can join us on Telegram as well.

