Example: Generating and Using Datasets#

This example shows how you can use expttools for an experiment that generates multiple rows of results, or a whole dataset.

The second part of the example shows how to pass a dataset as a parameter, including how Experiment objects can accept ExperimentResult objects.

This flow would be used in machine-learning-style experiments, for example, where the data generating experiment might produce:

  • different pre-processing

  • synthetic data generated in different ways

  • different subsets of a large dataset

and the data analysis experiment fits a model (the parameter grid might specify a list of different models) and evaluates its performance.

flowchart LR
    DG[Data Generating Experiment]
    ERDG[Data Generating Experiment Results]
    DA[Data Analysis Experiment]
    DG --> ERDG
    ERDG --> DA

Set up an experiment#

We want to demonstrate a hypothetical simulation experiment that generates a whole dataset. For demonstration purposes we want the experiment code to be concise, so we simply generate random data of a size determined by the parameters.

import pandas as pd
import numpy as np

def my_experiment(amplitude, n_rows, n_cols):
    '''
    a toy experiment that generates random data for a given number of rows 
    '''
    return pd.DataFrame(amplitude*np.random.rand(n_rows,n_cols), 
                        columns = ['x' + str(i) for i in range(n_cols)])

We can see this is a very simple experiment by calling it

my_experiment(3,5,2)
x0 x1
0 1.336771 2.064318
1 0.853108 2.828292
2 0.994010 0.582255
3 1.055447 2.977931
4 2.614019 2.952127

Create a Parameter Grid#

The keys of the parameter grid have to match the parameter names of your experiment function, but the values can be anything. For our toy experiment we will use logarithmically spaced values for rows, a linearly spaced number of columns, and random amplitudes. The rows and columns have to be integers, so we will use np.floor to round them down.

rows_to_test = np.logspace(1,5,num = 5,base = 10.0)
cols_to_test = np.linspace(2,8,num=4)
amps = np.random.randn(5)*10

param_grid = {'n_cols':np.floor(cols_to_test).astype(int),
             'n_rows':np.floor(rows_to_test).astype(int),
             'amplitude':amps}
param_grid
{'n_cols': array([2, 4, 6, 8]),
 'n_rows': array([    10,    100,   1000,  10000, 100000]),
 'amplitude': array([ 3.67363272,  0.62550528, -9.47561484, -5.11964899, -0.86044842])}
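Because a key/parameter mismatch is an easy mistake to make, it can be worth a quick sanity check before running a batch. The helper below (`check_grid` is our own name, not part of expttools) compares the grid keys against the function signature:

```python
# sanity check: confirm the grid keys match the experiment function's
# parameter names before committing to a long batch run
import inspect

def check_grid(func, grid):
    '''return True when the grid keys exactly match func's parameter names'''
    return set(inspect.signature(func).parameters) == set(grid)

def my_experiment(amplitude, n_rows, n_cols):
    pass

check_grid(my_experiment, {'n_cols': [2], 'n_rows': [10], 'amplitude': [1.0]})  # True
```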

Create a directory to save the results#

Be sure to create a folder for the results before running your code. This does not need to be done in your script; we do it here as an example and so that this page can build.

By default, Experiment.run_batch looks for a results folder in the current working directory, so we make one with that name here and rely on the default later. However, your folder can be named anything, as long as you pass that path to the base_path parameter when you call run_batch.

Hide code cell source

import os
if not(os.path.isdir('results')):
    os.mkdir('results')

Run the first experiment#

import the experiment object

from expttools import Experiment

and then instantiate and run the batch with defaults

my_expt = Experiment(my_experiment,param_grid)

Once we create it, we can run the batch job.

batchname, successes, fails = my_expt.run_batch()

Look at the result directory structure#

%%bash
ls results/
my_experiment2025_10_07_00_38_06_015334

One of the returned values is the name of the folder the results are in.

batchname
'my_experiment2025_10_07_00_38_06_015334'
%%bash -s "$batchname"
ls results/$1
2_100000_-0.8604484169776738
2_100000_-5.119648991930229
2_100000_-9.475614842631877
2_100000_0.6255052759307506
2_100000_3.673632715630717
2_10000_-0.8604484169776738
2_10000_-5.119648991930229
2_10000_-9.475614842631877
2_10000_0.6255052759307506
2_10000_3.673632715630717
2_1000_-0.8604484169776738
2_1000_-5.119648991930229
2_1000_-9.475614842631877
2_1000_0.6255052759307506
2_1000_3.673632715630717
2_100_-0.8604484169776738
2_100_-5.119648991930229
2_100_-9.475614842631877
2_100_0.6255052759307506
2_100_3.673632715630717
2_10_-0.8604484169776738
2_10_-5.119648991930229
2_10_-9.475614842631877
2_10_0.6255052759307506
2_10_3.673632715630717
4_100000_-0.8604484169776738
4_100000_-5.119648991930229
4_100000_-9.475614842631877
4_100000_0.6255052759307506
4_100000_3.673632715630717
4_10000_-0.8604484169776738
4_10000_-5.119648991930229
4_10000_-9.475614842631877
4_10000_0.6255052759307506
4_10000_3.673632715630717
4_1000_-0.8604484169776738
4_1000_-5.119648991930229
4_1000_-9.475614842631877
4_1000_0.6255052759307506
4_1000_3.673632715630717
4_100_-0.8604484169776738
4_100_-5.119648991930229
4_100_-9.475614842631877
4_100_0.6255052759307506
4_100_3.673632715630717
4_10_-0.8604484169776738
4_10_-5.119648991930229
4_10_-9.475614842631877
4_10_0.6255052759307506
4_10_3.673632715630717
6_100000_-0.8604484169776738
6_100000_-5.119648991930229
6_100000_-9.475614842631877
6_100000_0.6255052759307506
6_100000_3.673632715630717
6_10000_-0.8604484169776738
6_10000_-5.119648991930229
6_10000_-9.475614842631877
6_10000_0.6255052759307506
6_10000_3.673632715630717
6_1000_-0.8604484169776738
6_1000_-5.119648991930229
6_1000_-9.475614842631877
6_1000_0.6255052759307506
6_1000_3.673632715630717
6_100_-0.8604484169776738
6_100_-5.119648991930229
6_100_-9.475614842631877
6_100_0.6255052759307506
6_100_3.673632715630717
6_10_-0.8604484169776738
6_10_-5.119648991930229
6_10_-9.475614842631877
6_10_0.6255052759307506
6_10_3.673632715630717
8_100000_-0.8604484169776738
8_100000_-5.119648991930229
8_100000_-9.475614842631877
8_100000_0.6255052759307506
8_100000_3.673632715630717
8_10000_-0.8604484169776738
8_10000_-5.119648991930229
8_10000_-9.475614842631877
8_10000_0.6255052759307506
8_10000_3.673632715630717
8_1000_-0.8604484169776738
8_1000_-5.119648991930229
8_1000_-9.475614842631877
8_1000_0.6255052759307506
8_1000_3.673632715630717
8_100_-0.8604484169776738
8_100_-5.119648991930229
8_100_-9.475614842631877
8_100_0.6255052759307506
8_100_3.673632715630717
8_10_-0.8604484169776738
8_10_-5.119648991930229
8_10_-9.475614842631877
8_10_0.6255052759307506
8_10_3.673632715630717
dependency_versions.txt

The other two returned values tell us which cases succeeded and which failed

len(successes), len(fails)
(100, 0)
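This count follows directly from the parameter grid: every combination of values is run once, so the batch size is the product of the lengths of the grid's value lists. A quick check of the arithmetic:

```python
# 4 column settings x 5 row settings x 5 amplitudes = 100 runs
from math import prod

param_grid = {'n_cols': [2, 4, 6, 8],
              'n_rows': [10, 100, 1000, 10000, 100000],
              'amplitude': [3.67, 0.63, -9.48, -5.12, -0.86]}
n_runs = prod(len(v) for v in param_grid.values())
print(n_runs)  # 100
```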

Load them back#

Now, we can load all of those datasets back in as an ExperimentResult object that allows us either to analyze the results or to pass them to the next experiment.

from expttools import ExperimentResult

we instantiate the result object with a top level directory where the results were saved.

my_results = ExperimentResult('results/'+batchname)

this object will list all of the results that succeeded

my_results.get_result_names()
['6_1000_0.6255052759307506',
 '4_100000_0.6255052759307506',
 '8_1000_-9.475614842631877',
 '8_100_-5.119648991930229',
 '2_10_-0.8604484169776738',
 '6_10000_0.6255052759307506',
 '8_10000_3.673632715630717',
 '6_1000_-0.8604484169776738',
 '2_100_3.673632715630717',
 '2_100000_0.6255052759307506',
 '6_100000_3.673632715630717',
 '4_1000_-5.119648991930229',
 '2_100_-9.475614842631877',
 '6_10_3.673632715630717',
 '4_10000_-5.119648991930229',
 '6_10000_3.673632715630717',
 '4_10000_0.6255052759307506',
 '4_100000_-0.8604484169776738',
 '4_100000_-5.119648991930229',
 '6_10_-0.8604484169776738',
 '6_100_3.673632715630717',
 '8_10000_-9.475614842631877',
 '2_100_-0.8604484169776738',
 '6_100000_0.6255052759307506',
 '2_10_-5.119648991930229',
 '2_1000_0.6255052759307506',
 '8_1000_-0.8604484169776738',
 '2_10_3.673632715630717',
 '2_10_-9.475614842631877',
 '2_100_-5.119648991930229',
 '4_10_-5.119648991930229',
 '6_10_-5.119648991930229',
 '8_100000_3.673632715630717',
 '6_100_0.6255052759307506',
 '4_10000_-0.8604484169776738',
 '6_100_-5.119648991930229',
 '6_10_0.6255052759307506',
 '6_10000_-0.8604484169776738',
 '4_1000_-0.8604484169776738',
 '8_100000_-5.119648991930229',
 '8_10_3.673632715630717',
 '4_10000_-9.475614842631877',
 '4_1000_-9.475614842631877',
 '6_100_-9.475614842631877',
 '4_100_-0.8604484169776738',
 '2_1000_3.673632715630717',
 '4_1000_3.673632715630717',
 '6_100000_-9.475614842631877',
 '2_100000_-5.119648991930229',
 '8_100_3.673632715630717',
 '6_1000_3.673632715630717',
 '8_100_-9.475614842631877',
 '2_10_0.6255052759307506',
 '6_1000_-5.119648991930229',
 '8_10000_0.6255052759307506',
 '4_1000_0.6255052759307506',
 '8_100000_-9.475614842631877',
 '8_1000_3.673632715630717',
 '4_100_3.673632715630717',
 '8_10000_-0.8604484169776738',
 '2_10000_-0.8604484169776738',
 '8_10_-0.8604484169776738',
 '8_100000_0.6255052759307506',
 '6_100000_-0.8604484169776738',
 '2_10000_-5.119648991930229',
 '4_10_-0.8604484169776738',
 '4_10000_3.673632715630717',
 '4_100000_-9.475614842631877',
 '4_100_-9.475614842631877',
 '6_100000_-5.119648991930229',
 '8_10_-5.119648991930229',
 '6_10000_-5.119648991930229',
 '4_10_0.6255052759307506',
 '2_1000_-9.475614842631877',
 '2_1000_-5.119648991930229',
 '8_10_-9.475614842631877',
 '4_100000_3.673632715630717',
 '6_100_-0.8604484169776738',
 '8_100_-0.8604484169776738',
 '8_100_0.6255052759307506',
 '2_100000_3.673632715630717',
 '4_10_-9.475614842631877',
 '6_1000_-9.475614842631877',
 '8_10_0.6255052759307506',
 '2_1000_-0.8604484169776738',
 '2_10000_0.6255052759307506',
 '6_10000_-9.475614842631877',
 '2_10000_-9.475614842631877',
 '8_1000_0.6255052759307506',
 '4_100_-5.119648991930229',
 '8_100000_-0.8604484169776738',
 '2_100_0.6255052759307506',
 '2_10000_3.673632715630717',
 '6_10_-9.475614842631877',
 '8_1000_-5.119648991930229',
 '8_10000_-5.119648991930229',
 '2_100000_-0.8604484169776738',
 '4_100_0.6255052759307506',
 '4_10_3.673632715630717',
 '2_100000_-9.475614842631877']

We can pull out only the parameters, which could then be merged with the results of a second experiment if needed

my_results.get_info_df()
n_cols n_rows amplitude dir_name
1 6 1000 0.6255052759307506 6_1000_0.6255052759307506
1 4 100000 0.6255052759307506 4_100000_0.6255052759307506
1 8 1000 -9.475614842631877 8_1000_-9.475614842631877
1 8 100 -5.119648991930229 8_100_-5.119648991930229
1 2 10 -0.8604484169776738 2_10_-0.8604484169776738
... ... ... ... ...
1 8 10000 -5.119648991930229 8_10000_-5.119648991930229
1 2 100000 -0.8604484169776738 2_100000_-0.8604484169776738
1 4 100 0.6255052759307506 4_100_0.6255052759307506
1 4 10 3.673632715630717 4_10_3.673632715630717
1 2 100000 -9.475614842631877 2_100000_-9.475614842631877

100 rows × 4 columns
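As a sketch of such a merge (the tables below are made-up stand-ins, not real output), joining the parameter info to a second experiment's summary on the shared dir_name column is plain pandas:

```python
import pandas as pd

# hypothetical stand-ins: parameter info like get_info_df() returns, and a
# summary table from a second experiment, both keyed on the run directory name
info_df = pd.DataFrame({'n_rows': [10, 100],
                        'amplitude': [0.5, 0.5],
                        'dir_name': ['2_10_0.5', '2_100_0.5']})
second_expt = pd.DataFrame({'dir_name': ['2_10_0.5', '2_100_0.5'],
                            'score': [0.91, 0.88]})

# one row per run, with parameters and second-stage results side by side
merged = info_df.merge(second_expt, on='dir_name')
```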

Passing a Dataset as a parameter#

For a whole dataset or other complex parameter, casting it to a string to produce run names would be confusing and hard to read. Here we describe three options for handling a complex parameter when passing it to an Experiment so that the run names stay neat.

Best Case: Pass file names#

The simplest way is for the experiment function to take a path. Passing paths and having the function load data from the path means that the path string can be used in the naming of the new results.

We can use the result object to get the paths to the data files.

my_results.get_result_dirs()
['results/my_experiment2025_10_07_00_38_06_015334/6_1000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/8_1000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/6_1000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_1000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/2_1000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/8_1000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/4_1000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/4_1000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/2_1000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_1000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/6_1000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/6_1000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/4_1000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/8_1000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/2_1000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/2_1000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/6_100_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/6_1000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/2_1000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10000_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/8_1000_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_100000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/2_10000_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/6_10_-9.475614842631877',
 'results/my_experiment2025_10_07_00_38_06_015334/8_1000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/8_10000_-5.119648991930229',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100000_-0.8604484169776738',
 'results/my_experiment2025_10_07_00_38_06_015334/4_100_0.6255052759307506',
 'results/my_experiment2025_10_07_00_38_06_015334/4_10_3.673632715630717',
 'results/my_experiment2025_10_07_00_38_06_015334/2_100000_-9.475614842631877']
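A path-taking experiment function could look like the sketch below. The file name result.csv is an assumption for illustration; check what your data generating experiment actually saved inside each run directory before using it.

```python
import os
import pandas as pd

def mean_expt_path(dataset_path):
    '''
    sketch: load the dataset saved in a run directory and return column means
    (assumes the data was saved as result.csv; check your own output files)
    '''
    dataset_df = pd.read_csv(os.path.join(dataset_path, 'result.csv'))
    return dataset_df.mean().to_frame().rename(columns={0: 'stat_value'})

# the directory paths above could then go straight into a parameter grid:
# param_grid_path = {'dataset_path': my_results.get_result_dirs()}
```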

From Expt Tools output#

If you design an experiment that takes in a dataframe as a parameter and you are generating the dataframes with an Experiment, you can pass an ExperimentResult object in the parameter grid. When the new Experiment object is instantiated, it will expand that into a list of DataFrames and pass a single DataFrame to each experiment function call.

For example:

def stat_expt(dataset_df,cur_stat):
    '''
    a simple experiment that takes in a dataset
    '''
    return cur_stat(dataset_df).to_frame().rename(columns={0:'stat_value'})

def meanex(df):
    '''
    a function that takes in a dataframe
    '''
    return df.mean()

def stdex(df):
    '''
    a function that takes in a dataframe
    '''
    return df.std()


stat_fx_list = [meanex,stdex]

then we can create a param grid with the result object we created above and run the batch:

param_grid_er = {'dataset_df':my_results,
                    'cur_stat':stat_fx_list}
my_reuse_expt = Experiment(stat_expt,param_grid_er)
batchname_er, successes_er, fails_er = my_reuse_expt.run_batch()

We can look at the fail/success counts to see that it works

len(successes_er), len(fails_er)
(200, 0)

From memory#

If you want to use a dataset or other complex object that is already loaded into memory the best way to pass it is through an object that has a name or __name__ attribute.

You can write your own or use the convenience class provided:

from expttools import NamedObject
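If you would rather write your own wrapper, a minimal sketch following the conventions this page relies on (a name attribute, a value attribute, and a copy method; MyNamedObject is our own name, not part of expttools) could look like:

```python
class MyNamedObject:
    '''
    minimal stand-in for a named wrapper: gives a complex value a short,
    readable name that can be used in result folder names
    '''
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def copy(self):
        # return self so the (possibly large) wrapped value is never duplicated
        return self
```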

Then you can create the param grid, wrapping each dataset with the NamedObject constructor.

param_grid_obj = {'dataset_obj':[ NamedObject('ex4',pd.DataFrame(np.random.rand(5),columns=['x0'])),
                    NamedObject('ex5',pd.DataFrame(np.random.rand(50))),
                    NamedObject('ex6',pd.DataFrame(np.random.rand(500)))],
                 'cur_stat':stat_fx_list}

and then make a version of the experiment that uses the value attribute of the object:

def stat_expt_obj(dataset_obj,cur_stat):
    '''
    a simple experiment that takes in a dataset
    '''

    return cur_stat(dataset_obj.value).to_frame().rename(columns={0:'stat_value'})

and finally you can set up and run your experiment

my_expt_obj = Experiment(stat_expt_obj,param_grid_obj)
batchname_obj, successes_obj, fails_obj = my_expt_obj.run_batch()

We can look at the fail/success counts to see that it works

len(successes_obj), len(fails_obj)
(6, 0)

A Note on Using NamedObject

In the NamedObject class, the following method is defined so that copying a wrapped object simply returns the object itself, which prevents errors when the wrapped parameter carries a substantial amount of data.

def copy(self):
    return self

Omitting a copy method like this in your own wrapper class may result in unexpected behavior or runtime issues, especially when the wrapped values are large. Be sure to include it if you write your own wrapper.

Viewing results of a 2-stage Experiment#

We can load either of the two versions that we ran:

stat_results_obj = ExperimentResult('results/' + batchname_obj)

# Stack the results using the stack_results method
stacked_stat_results_obj = stat_results_obj.stack_results()

# Display the stacked results
stacked_stat_results_obj.head()
dataset_obj cur_stat dir_name stat_value
0 ex6 stdex ex6_stdex 0.289329
0 ex5 stdex ex5_stdex 0.303017
0 ex6 meanex ex6_meanex 0.508162
0 ex5 meanex ex5_meanex 0.523321
0 ex4 stdex ex4_stdex 0.200813
stat_results_er = ExperimentResult('results/' + batchname_er)

# Stack the results using the stack_results method
stacked_stat_results_er = stat_results_er.stack_results()

# Display the stacked results
stacked_stat_results_er.head()
dataset_df cur_stat n_cols n_rows amplitude dir_name stat_value
0 8_10000_-0.8604484169776738 stdex 8 10000 -0.8604484169776738 8_10000_-0.8604484169776738_stdex 0.247477
1 8_10000_-0.8604484169776738 stdex 8 10000 -0.8604484169776738 8_10000_-0.8604484169776738_stdex 0.248896
2 8_10000_-0.8604484169776738 stdex 8 10000 -0.8604484169776738 8_10000_-0.8604484169776738_stdex 0.247294
3 8_10000_-0.8604484169776738 stdex 8 10000 -0.8604484169776738 8_10000_-0.8604484169776738_stdex 0.249650
4 8_10000_-0.8604484169776738 stdex 8 10000 -0.8604484169776738 8_10000_-0.8604484169776738_stdex 0.249162
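Once stacked, the combined table is an ordinary DataFrame, so standard pandas tools apply. For instance, on a small stand-in table with the same kind of columns (the values here are made up, not real output):

```python
import pandas as pd

# stand-in for a stacked results table (made-up values)
stacked = pd.DataFrame({'cur_stat': ['meanex', 'meanex', 'stdex', 'stdex'],
                        'n_rows': [10, 100, 10, 100],
                        'stat_value': [0.52, 0.50, 0.30, 0.29]})

# average statistic value for each statistic function across all runs
summary = stacked.groupby('cur_stat')['stat_value'].mean()
```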