Pandas is so vast and deep that it enables you to execute virtually any tabular manipulation you can think of. However, this vastness sometimes comes at a cost.

Many elegant features that solve rare edge cases and unique scenarios get lost in the documentation, overshadowed by the more frequently used functions.

This article aims to rediscover those features and show you that Pandas is more capable than you ever knew.

`ExcelWriter`

`ExcelWriter` is a generic class for creating Excel files (with sheets!) and writing DataFrames to them. Let's say we have these two:

```python
import pandas as pd
import seaborn as sns

# Load two datasets
diamonds = sns.load_dataset("diamonds")
tips = sns.load_dataset("tips")

# Write both to the same Excel file, one sheet each
with pd.ExcelWriter("data/data.xlsx") as writer:
    diamonds.to_excel(writer, sheet_name="diamonds")
    tips.to_excel(writer, sheet_name="tips")
```

It accepts additional parameters to specify the datetime format, whether you want to create a new Excel file or modify an existing one, what happens when a sheet already exists, and so on. Check out the details in the documentation.
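For instance, here is a minimal sketch of appending to an existing workbook; this example is my own assumption rather than from the original, and it requires pandas ≥ 1.3 and the `openpyxl` engine:

```python
# Sketch: append a sheet to the workbook created above, replacing it if it
# already exists, and control how datetimes are written
with pd.ExcelWriter(
    "data/data.xlsx",
    mode="a",                     # modify the existing file instead of overwriting it
    engine="openpyxl",
    if_sheet_exists="replace",    # what happens when the sheet name collides
    datetime_format="YYYY-MM-DD HH:MM:SS",
) as writer:
    tips.to_excel(writer, sheet_name="tips_v2")
```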

`pipe`

`pipe` is one of the best functions for doing data cleaning in a concise, compact manner in Pandas. It allows you to chain multiple custom functions into a single operation.

For example, let's say you have functions `drop_duplicates`, `remove_outliers`, and `encode_categoricals` that accept their own arguments. Here is how you apply all three in a single operation:

```python
df_preped = (
    diamonds.pipe(drop_duplicates)
            .pipe(remove_outliers, ["price", "carat", "depth"])
            .pipe(encode_categoricals, ["cut", "color", "clarity"])
)
```
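For reference, here is a minimal sketch of what such pipe-compatible helpers might look like; the implementations below are my own assumptions, since the article does not show them. Each helper only needs to take the DataFrame as its first argument and return a DataFrame:

```python
import pandas as pd

def drop_duplicates(df: pd.DataFrame) -> pd.DataFrame:
    # Remove exact duplicate rows
    return df.drop_duplicates()

def remove_outliers(df: pd.DataFrame, cols: list) -> pd.DataFrame:
    # Keep rows within 3 standard deviations of the mean for each column
    for col in cols:
        mean, std = df[col].mean(), df[col].std()
        df = df[df[col].between(mean - 3 * std, mean + 3 * std)]
    return df

def encode_categoricals(df: pd.DataFrame, cols: list) -> pd.DataFrame:
    # Integer-encode the given categorical columns
    df = df.copy()
    for col in cols:
        df[col] = pd.factorize(df[col])[0]
    return df
```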

I like how this function resembles Sklearn pipelines. There is more you can do with it, so check out the documentation or this helpful article.

`factorize`

This function is a pandas alternative to Sklearn's `LabelEncoder`:

```python
# Mind the [0] at the end
diamonds["cut_enc"] = pd.factorize(diamonds["cut"])[0]

>>> diamonds["cut_enc"].sample(5)

52103    2
39813    0
31843    0
10675    0
6634     0
Name: cut_enc, dtype: int64
```

Unlike `LabelEncoder`, `factorize` returns a tuple of two values: the encoded column and a list of the unique categories:

```python
codes, unique = pd.factorize(diamonds["cut"], sort=True)

>>> codes[:10]
array([0, 1, 3, 1, 3, 2, 2, 2, 4, 2], dtype=int64)

>>> unique
['Ideal', 'Premium', 'Very Good', 'Good', 'Fair']
```

`explode`

A function with an interesting name is `explode`. Let's see an example first and then explain:

```python
data = pd.Series([1, 6, 7, [46, 56, 49], 45, [15, 10, 12]]).to_frame("dirty")

>>> data
```

The `dirty` column has two rows where values are recorded as actual lists. You may often see this type of data in surveys, as some questions accept multiple answers.

```python
>>> data.explode("dirty", ignore_index=True)
```

`explode` takes a cell with an array-like value and *explodes* it into multiple rows. Set `ignore_index` to True to reset the result to a clean, ordered numeric index.

`squeeze`

Another function with a funky name is `squeeze`, and it is used in very rare but annoying edge cases.

One of these cases is when a single value is returned from a condition used to subset a DataFrame. Consider this example:

```python
subset = diamonds.loc[diamonds.index < 1, ["price"]]

>>> subset
```

Even though there is just one cell, it is returned as a DataFrame. This can be annoying since you now have to use `.loc` again with both the column name and index to access the price.

But if you know `squeeze`, you don't have to. The function enables you to remove an axis from a single-cell DataFrame or Series. For example:

```python
>>> subset.squeeze()
326
```

Now, only the scalar is returned. It is also possible to specify the axis to remove:

```python
>>> subset.squeeze("columns")  # or "rows"

0    326
Name: price, dtype: int64
```

Note that `squeeze` only works for DataFrames or Series with single values.

`between`

A rather nifty function for boolean indexing of numeric features within a range:

```python
# Get diamonds that are priced between 3500 and 3700 dollars
diamonds[diamonds["price"].between(3500, 3700, inclusive="neither")].sample(5)
```

`style`

Did you know that Pandas allows you to style DataFrames?

They have a `style` attribute, which opens the door to customizations and styles limited only by your HTML and CSS knowledge. I won't discuss the full details of what you can do with `style` but only show you my favorite functions:

```python
>>> diabetes.describe().T.drop("count", axis=1).style.highlight_max(color="darkred")
```

Above, we are highlighting cells that hold the maximum value of a column. Another cool styler is `background_gradient`, which can give columns a gradient background color based on their values:

```python
diabetes.describe().T.drop("count", axis=1).style.background_gradient(
    subset=["mean", "50%"], cmap="Reds"
)
```

This feature comes in especially handy when you are using `describe` on a table with many columns and want to compare summary statistics. Check out the documentation of the styler here.

`pd.options`

Like Matplotlib, pandas has global settings that you can tweak to change its default behaviors.
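For example (a small sketch; the specific values chosen here are only illustrative):

```python
import pandas as pd

# Show up to 30 columns when printing a DataFrame
pd.options.display.max_columns = 30

# Display floats with 3 decimal places
pd.options.display.precision = 3

# The same options can also be set through pd.set_option
pd.set_option("display.max_rows", 50)
```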

These settings are divided into 5 modules:

```python
>>> dir(pd.options)
['compute', 'display', 'io', 'mode', 'plotting']
```

Let's see what settings there are under `display`. There are many, but I mostly use `max_columns` and `precision`:

```python
>>> dir(pd.options.display)

['chop_threshold',
 'max_columns',
 'max_colwidth',
 'max_info_columns',
 'max_info_rows',
 'max_rows',
 ...
 'precision',
 'show_dimensions',
 'unicode',
 'width']
```

You can check out the documentation to dig deeper into this wonderful feature.

`convert_dtypes`

We all know that pandas has an annoying tendency to mark some columns as the `object` data type. Instead of manually specifying their types, you can use the `convert_dtypes` method, which tries to infer the best data type:

```python
sample = pd.read_csv(
    "data/station_day.csv",
    usecols=["StationId", "CO", "O3", "AQI_Bucket"],
)

>>> sample.dtypes

StationId      object
CO            float64
O3            float64
AQI_Bucket     object
dtype: object

>>> sample.convert_dtypes().dtypes

StationId      string
CO            float64
O3            float64
AQI_Bucket     string
dtype: object
```

Unfortunately, it can’t parse dates due to the caveats of different date-time formats.
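If you do need proper datetimes, you can still parse the relevant columns explicitly with `pd.to_datetime`; the column name and format in this sketch are made up for illustration:

```python
# Sketch: convert_dtypes leaves date strings as `string`/`object`,
# so parse them yourself when the format is known
df = pd.DataFrame({"date": ["2021-08-01", "2021-08-02"]})
df["date"] = pd.to_datetime(df["date"], format="%Y-%m-%d")

df.dtypes  # date -> datetime64[ns]
```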

`select_dtypes`

A function I use all the time is `select_dtypes`. I think it is obvious from the name what the function does. It has `include` and `exclude` parameters that you can use to select columns including or excluding certain data types.

For example, choose only numeric columns with `np.number`:

```python
import numpy as np

# Choose only numerical columns
diamonds.select_dtypes(include=np.number).head()
```

Or `exclude` them:

```python
# Exclude numerical columns
diamonds.select_dtypes(exclude=np.number).head()
```

`mask`

`mask` allows you to quickly replace cell values where a custom condition is true.

For example, let’s say we have survey data collected from people aged 50–60.

```python
# Create sample data
ages = pd.Series([55, 52, 50, 66, 57, 59, 49, 60]).to_frame("ages")

ages
```

We will treat ages outside the 50–60 range (there are two: 49 and 66) as data entry mistakes and replace them with NaNs.

```python
ages.mask(cond=~ages["ages"].between(50, 60), other=np.nan)
```

So, `mask` replaces values that don't meet `cond` with `other`.

`nlargest` and `nsmallest`

Sometimes you don't just want the min/max of a column; you want to see the top or bottom N values of a variable. This is where `nlargest` and `nsmallest` come in handy.

Let’s see the top 5 most expensive and cheapest diamonds:

```python
diamonds.nlargest(5, "price")
```

```python
diamonds.nsmallest(5, "price")
```

`idxmax` and `idxmin`

When you call `max` or `min` on a column, pandas returns the largest/smallest value. However, sometimes you want the *position* of the min/max, which is not possible with these functions.

Instead, you should use `idxmax`/`idxmin`:

```python
>>> diamonds.price.idxmax()
27749

>>> diamonds.carat.idxmin()
14
```

You can also specify the `columns` axis, in which case the functions return, for each row, the label of the column holding the max/min.
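A quick sketch of that (the column subset here is my own choice for illustration):

```python
# For each row, return the label of the column with the largest value
# among the x, y, z dimensions
diamonds[["x", "y", "z"]].idxmax(axis="columns").head()
```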

`value_counts` with `dropna=False`

A common operation to find the percentage of missing values is to chain `isnull` and `sum` and divide by the length of the array.

But you can do the same thing with `value_counts` and the relevant arguments:

```python
ames_housing = pd.read_csv("data/train.csv")

>>> ames_housing["FireplaceQu"].value_counts(dropna=False, normalize=True)

NaN    0.47260
Gd     0.26027
TA     0.21438
Fa     0.02260
Ex     0.01644
Po     0.01370
Name: FireplaceQu, dtype: float64
```

The fireplace quality column of the Ames housing dataset consists of 47% nulls.
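For comparison, here is a small sketch of the classic chain; it should give the same ~47% figure as the `value_counts` output above:

```python
# Fraction of missing values via the isnull/sum chain
ames_housing["FireplaceQu"].isnull().sum() / len(ames_housing)
```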

`clip`

Outlier detection and removal are common in data analysis.

The `clip` function makes it really easy to find outliers outside a range and replace them with the hard limits.

Let's go back to the ages example. This time, we will replace the out-of-range ages with the hard limits of 50 and 60:

```python
>>> ages.clip(50, 60)
```

Fast and efficient!

`at_time` and `between_time`

These two can be useful when working with time series that have high granularity.

`at_time` allows you to subset values at a specific date or time. Consider this time series:

```python
index = pd.date_range("2021-08-01", periods=100, freq="H")
data = pd.DataFrame({"col": list(range(100))}, index=index)

>>> data.head()
```

Let’s select all rows at 3 PM:

```python
>>> data.at_time("15:00")
```

Cool, huh? Now, let's use `between_time` to select rows within a custom interval:

```python
>>> data.between_time("09:45", "12:00")
```

Note that both functions require a `DatetimeIndex`, and they only work with times (as in *o'clock*). If you want to subset within a datetime interval, use `between`.
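As a rough sketch of that last point (the interval below is made up for illustration):

```python
# Subset rows whose timestamps fall within a full datetime interval,
# not just a time of day
data[data.index.to_series().between("2021-08-01 09:00", "2021-08-02 12:00")]
```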

`bdate_range`

`bdate_range` is a short-hand function for creating time-series indices with business-day frequency:

```python
series = pd.bdate_range("2021-01-01", "2021-01-31")  # A period of one month

>>> len(series)
21
```

Business-day frequencies are common in the financial world, so this function may come in handy when reindexing an existing time series with the `reindex` function.
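As a small sketch of that reindexing idea (the daily series here is made up for illustration):

```python
# A made-up daily series covering January 2021
daily = pd.Series(range(31), index=pd.date_range("2021-01-01", "2021-01-31"))

# Keep only business days by reindexing onto a bdate_range index
business_days = daily.reindex(pd.bdate_range("2021-01-01", "2021-01-31"))
```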

`autocorr`

One of the critical components in time-series analysis is examining the autocorrelation of a variable.

Autocorrelation is the plain-old correlation coefficient, but calculated between a time series and a lagged version of itself.

In more detail, the autocorrelation of a time series at `lag=k` is calculated as follows:

1. The time series is shifted by `k` periods:
```python
# Copy to avoid modifying `tips` (and a SettingWithCopyWarning)
time_series = tips[["tip"]].copy()
time_series["lag_1"] = time_series["tip"].shift(1)
time_series["lag_2"] = time_series["tip"].shift(2)
time_series["lag_3"] = time_series["tip"].shift(3)
time_series["lag_4"] = time_series["tip"].shift(4)
# time_series["lag_k"] = time_series["tip"].shift(k)

>>> time_series.head()
```

2. The correlation is calculated between the original `tip` and each `lag_*` column (see the sketch below).
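Here is a minimal sketch of that manual version at lag 1 (my own illustration of the two steps above):

```python
# Manual autocorrelation at lag 1: correlate tip with its shifted copy
time_series["tip"].corr(time_series["lag_1"])
```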

Instead of doing all this manually, you can use Pandas' `autocorr` function:

```python
# Autocorrelation of tip at lag 8
>>> time_series["tip"].autocorr(lag=8)
0.07475238789967077
```

You can read more about the importance of autocorrelation in time-series analysis from this post.

`hasnans`

Pandas offers a quick method to check whether a given series contains any nulls with the `hasnans` attribute:

```python
series = pd.Series([2, 4, 6, "sadf", np.nan])

>>> series.hasnans
True
```

According to its documentation, it enables various performance increases. Note that the attribute works only on `pd.Series`.
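If you need a similar check for a whole DataFrame, one option (a sketch; there is no DataFrame-level `hasnans`) is to chain `isna` and `any`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, np.nan], "b": [3, 4]})

# True if any cell in the DataFrame is null
df.isna().any().any()
```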

`at` and `iat`

These two accessors are much faster alternatives to `loc` and `iloc`, with one disadvantage: they only allow selecting or replacing a single value at a time:

```python
# [row label, column label]
>>> diamonds.at[234, "cut"]
'Ideal'

# [row position, column position]
>>> diamonds.iat[1564, 4]
61.2

# Replace the value at index 16541 of the price column
>>> diamonds.at[16541, "price"] = 10000
```

`argsort`

You should use this function when you want to extract the indices that would sort an array:

```python
tips.reset_index(inplace=True, drop=True)

sort_idx = tips["total_bill"].argsort(kind="mergesort")

# Now, sort `tips` based on total_bill
tips.iloc[sort_idx].head()
```

`cat` accessor

It is common knowledge that Pandas enables you to use built-in Python date and string functions on columns via accessors like `dt` or `str`.

Pandas also has a special `category` data type for categorical variables, as can be seen below:

```python
>>> diamonds.dtypes

carat       float64
cut        category
color      category
clarity    category
depth       float64
table       float64
price         int64
x           float64
y           float64
z           float64
cut_enc       int64
dtype: object
```

When a column is `category`, you can use several special functions via the `cat` accessor. For example, let's see the unique categories of diamond cuts:

>>> diamonds["cut"].cat.categories ['Ideal', 'Premium', 'Very Good', 'Good', 'Fair']

There are also functions like `remove_categories`, `rename_categories`, etc.:

diamonds["new_cuts"] = diamonds["cut"].cat.rename_categories(list("ABCDE")) >>> diamonds["new_cuts"].cat.categories Index(['A', 'B', 'C', 'D', 'E'], dtype='object')

You can see the full list of functions under the `cat` accessor here.

`GroupBy.nth`

This function only works with `GroupBy` objects. Specifically, after grouping, `nth` returns the nth row from each group:

>>> diamonds.groupby("cut").nth(5)

Even though libraries like Dask and Datatable are slowly gaining ground on Pandas with their shiny new features for handling massive datasets, Pandas remains the most widely used data manipulation tool in the Python data science ecosystem.

The library is a role model for other packages to imitate and improve upon, as it integrates into the modern SciPy stack so well.
