\chapter{Processing} \label{cha:pro}

\begin{dquote}
  What we have is data glut.
  What we really want is the ability to manipulate the information and to reach conclusions from
  it.
  I think we are at the point where that is slipping beyond unaided humans’ abilities.
  So the real thing to be looking for is processing schemes.
  One way is automatic processing: for instance, the sort of analysis that we saw with the IBM
  Watson on Jeopardy.
  Putting that in service to humankind in fields that are suffering from data glut at least gives
  people who are in charge the ability to keep some sort of track of what is going on.

  The other great thing that we have going for us is that we have billions of very intelligent
  people out there in the world.
  With the networking that we have now, we’re beginning to see that those large populations,
  coordinating amongst themselves, are an intellectual resource that trumps all institutional
  intellectual resources and has a real possibility, if it’s supported by the proper automation, of
  creating solutions to problems, including the problem of the data glut.

  \dsignature{Vernor Vinge \cite{VingeVernor2012a}}
\end{dquote}

\clearpage

From a data science perspective, CMDS has several unique challenges:
\begin{ditemize}
  \item Dimensionality of datasets is often greater than two, complicating
    \textbf{representation}.
  \item Shape and dimensionality change from experiment to experiment.
  \item Data can be large (over one million points).  % TODO: contextualize large (not BIG DATA)
\end{ditemize}
I have designed a software package, WrightTools, that directly addresses these issues.  %

WrightTools is at the heart of all data processing work in the Wright Group.  %

% TODO: more intro

\section{Introduction to WrightTools}  % ==========================================================

WrightTools is written in Python, and endeavors to have a ``pythonic'', explicit, and natural
application programming interface (API).  %
To use WrightTools, simply import it:
\begin{codefragment}{python}
>>> import WrightTools as wt
>>> wt.__version__
'3.0.0'
\end{codefragment}
I discuss how WrightTools packaging, distribution, and installation work in
\autoref{pro:sec:distribution}.

We can use the built-in Python function \python{dir} to interrogate the contents of the
WrightTools package.  %
\begin{codefragment}{python}
>>> dir(wt)
['Collection',
 'Data',
 '__branch__',
 '__builtins__',
 '__cached__',
 '__doc__',
 '__file__',
 '__loader__',
 '__name__',
 '__package__',
 '__path__',
 '__spec__',
 '__version__',
 '__wt5_version__',
 '_dataset',
 '_group',
 '_open',
 '_sys',
 'artists',
 'collection',
 'data',
 'diagrams',
 'exceptions',
 'kit',
 'open',
 'units']
\end{codefragment}  % TODO: consider adding fit to this list
Many of these are dunder (double underscore) attributes---Python internals that are not normally
used directly.  %
The ten attributes that do not start with underscore are the public API that users of WrightTools
typically use.  %
Within the public API are two classes, \python{Collection} \&
\python{Data}, which are the two main classes in the WrightTools object model.  %
\python{Data} stores spectra directly as multidimensional arrays, and
\python{Collection} stores \textit{groups} of data objects (and other collection
objects) in a hierarchical way for internal organization purposes.  %

WrightTools uses a programming strategy called object-oriented programming (OOP).  %
% TODO: introduce HDF5
% TODO: elaborate on the concept of OOP and how it relates to WrightTools

It contains a central data ``container'' that is capable of storing all of the information about
each multidimensional (or one-dimensional) spectrum: the \python{Data} class.  %
It also defines a \python{Collection} class that contains data objects, collection
objects, and other pieces of metadata in a hierarchical structure.  %
Let's first discuss \python{Data}.

All spectra are stored within WrightTools as multidimensional arrays.  %
Arrays are containers that store many instances of the same data type, typically numerical
datatypes.  %
These arrays have some \python{shape}, \python{size}, and
\python{dtype}.  %
In the context of WrightTools, they can contain floats, integers, complex numbers and NaNs.  %

The \python{Data} class contains everything that is needed to define a single spectrum
from a single experiment (or simulation).  %
To do this, each data object contains several multidimensional arrays (typically 2 to 50 arrays,
depending on the kind of data).  %
There are two kinds of arrays, instances of \python{Variable} and \python{Channel}.  %
Variables are coordinate arrays that define the position of each pixel in the multidimensional
spectrum, and channels are each a particular kind of signal within that spectrum.  %
Typical variables might be \python{[w1, w2, w3, d1, d2]}, and typical channels
\python{[pmt, pyro1, pyro2, pyro3]}.  %

As an overview, the following lexicographically lists the attributes and methods of
\python{Data}.  %
\begin{ditemize}
  \item method \python{collapse}: Collapse along one dimension in a well-defined way.
  \item method \python{convert}: Convert all axes of a certain kind.
  \item method \python{create_channel}: Create a new channel.
  \item method \python{create_variable}: Create a new variable.
  \item method \python{fullpath}
  \item method \python{get_nadir}
  \item method \python{get_zenith}
  \item method \python{heal}
  \item attribute \python{kind}
  \item method \python{level}
  \item method \python{map_variable}
  \item attribute \python{natural_name}
  \item attribute \python{ndim}
  \item method \python{offset}
  \item method \python{print_tree}
  \item method \python{remove_channel}
  \item method \python{remove_variable}
  \item method \python{rename_channels}
  \item method \python{rename_variables}
  \item attribute \python{shape}
  \item method \python{share_nans}
  \item attribute \python{size}
  \item method \python{smooth}
  \item attribute \python{source}
  \item method \python{split}
  \item method \python{transform}
  \item attribute \python{units}
  \item attribute \python{variable_names}
  \item attribute \python{variables}
  \item method \python{zoom}
\end{ditemize}

Each data object contains instances of \python{Channel} and \python{Variable} which represent the
principal multidimensional arrays.  %
The following lexicographically lists the attributes of these instances.  %
Certain methods and attributes are unique to only one type of dataset, and are marked as such.  %
\begin{ditemize}
  \item method \python{argmax}
  \item method \python{argmin}
  \item method \python{chunkwise}
  \item method \python{clip}
  \item method \python{convert}
  \item attribute \python{full}
  \item attribute \python{fullpath}
  \item attribute \python{label} (variable only)
  \item method \python{log}
  \item method \python{log10}
  \item method \python{log2}
  \item method \python{mag}
  \item attribute \python{major_extent} (channel only)
  \item method \python{max}
  \item method \python{min}
  \item attribute \python{minor_extent} (channel only)
  \item attribute \python{natural_name}
  \item method \python{normalize} (channel only)
  \item attribute \python{null} (channel only)
  \item attribute \python{parent}
  \item attribute \python{points}
  \item attribute \python{signed} (channel only)
  \item method \python{slices}
  \item method \python{symmetric_root}
  \item method \python{trim} (channel only)
\end{ditemize}
Channels and variables also support direct indexing / slicing using \python{__getitem__}, as
discussed more in...  % TODO: where is it discussed more?
 
Axes are ways to organize data as functions of particular variables (and combinations thereof).  %
The \python{Axis} class does not directly contain the respective arrays---it merely refers to the
associated variables.  %
The flexibility of this association is one of the main new features in WrightTools 3.  %
It enables data transformation, discussed in section ...  % TODO: link to section
Axis expressions are simple human-friendly strings made up of numbers and variable
\python{natural_name}s.  %
Given 5 variables with names \python{['w1', 'w2', 'wm', 'd1', 'd2']}, example valid expressions
include \python{'w1'}, \python{'w1=wm'}, \python{'w1+w2'}, \python{'2*w1'}, \python{'d1-d2'}, and
\python{'wm-w1+w2'}.  %
Axes can be directly indexed / sliced into using \python{__getitem__}, and they support many of the
``numpy-like'' attributes.  %
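As a minimal sketch (assuming a data object \python{data} with the variables above; the
\python{axis_expressions} attribute is shown for illustration), new axes are requested by passing
expressions to \python{transform}:
\begin{codefragment}{python}
>>> data.transform('wm-w1+w2', 'd1')
>>> data.axis_expressions
('wm-w1+w2', 'd1')
\end{codefragment}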
A lexicographical list of axis attributes and methods follows.
\begin{ditemize}
  \item method \python{convert}
  \item attribute \python{full}
  \item attribute \python{label}
  \item method \python{max}
  \item method \python{min}
  \item attribute \python{natural_name}
  \item attribute \python{ndim}
  \item attribute \python{points}
  \item attribute \python{shape}
  \item attribute \python{size}
  \item attribute \python{units_kind}
  \item attribute \python{variables}
\end{ditemize}

\section{Creating a data object}  % ===============================================================

WrightTools data objects are capable of storing arbitrary multidimensional spectra, but how can we
actually get data into WrightTools?  %
If you start with a wt5 file, the answer is easy: \python{wt.open(<filepath>)}.  %
But what if you have data that was written using some other software?  %
WrightTools offers data conversion functions (``from'' functions) that do the hard work of creating
data objects from other files.  %
These from-functions are as parameter-free as possible, which means they recognize details like
shape and units from each specific file format without manual user intervention.  %

The most important thing about from-functions is that they are extensible: that is, that more
from-functions can be easily added as needed.  %
This modular approach to data creation means that individuals who want to use WrightTools for new
data sources can simply add one function to unlock the capabilities of the entire package as
applied to their data.  %
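As a minimal sketch (the filepath here is hypothetical), importing data is a single call:
\begin{codefragment}{python}
>>> import WrightTools as wt
>>> data = wt.data.from_PyCMDS('acquisition.data')  # hypothetical filepath
\end{codefragment}
The from-function reads shape, units, and names from the file itself, so no further arguments are
needed.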

Following are the current from-functions, and the types of data that they support.
\begin{ditemize}
  \item Cary (collection creation)
  \item COLORS
  \item KENT
  \item PyCMDS
  \item Ocean Optics
  \item Shimadzu
  \item Tensor27
\end{ditemize}  % TODO: complete list, update wright.tools to be consistent
  
\subsection{Discover dimensions}  % ----------------------------------------------------------------

Certain older Wright Group file types (COLORS and KENT) are particularly difficult to import using
a parameter-free from-function.  %
There are two problems:
\begin{ditemize}
  \item Dimensionality limitation to individual files (1D for KENT, 2D for COLORS).
  \item Lack of self-describing metadata (headers).
\end{ditemize}
The way that WrightTools handles data creation for these file-types deserves special discussion.  %

Firstly, WrightTools contains hardcoded column information for each filetype.
Data from Kent Meyer's ``picosecond control'' software had consistent columns over the lifetime of
the software, so only one dictionary is needed to store these correspondences.  %
Skye Kain's ``COLORS'' software used at least 7 different formats, and unfortunately these format
types were not fully documented.  %
WrightTools attempts to guess the COLORS data format by counting the number of columns.  %

Because these file-types are dimensionality-limited, many acquisitions span
multiple files.  %
COLORS offered an explicit queue manager which allowed users to repeat the same 2D scan (often a
Wigner scan) many times at different coordinates in non-scanned dimensions.  %
ps\_control scans were done more manually.  %
To account for this problem of multiple files spanning a single acquisition, the functions
\python{from_COLORS} and \python{from_KENT} optionally accept \emph{lists} of filepaths.  %
Inside the function, WrightTools simply appends the arrays from all given files into one long array
with many more rows.  %
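For example (filepaths hypothetical), a movie acquired as many 2D scans is imported as one object:
\begin{codefragment}{python}
>>> paths = ['wigner_000.dat', 'wigner_001.dat', 'wigner_002.dat']
>>> data = wt.data.from_COLORS(paths)  # rows from all files are appended
\end{codefragment}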

The final and most challenging problem of parameter-free importing for these filetypes is
\emph{dimensionality recognition}.  %
Because the files contain no metadata, the shape and coordinates of the original acquisition must
be guessed by simply inspecting the columnar arrays.  %
In general, this problem can become very hard.  %
Luckily, each of these previous instrumental software packages was only used on one instrument with
limited flexibility in acquisition type, so it is possible to make educated guesses for almost all
acquisitions.  %

The function \python{wt.kit.discover_dimensions} handles the work of dimensionality recognition for
both COLORS and ps\_control arrays.  %
This function may be used for more filetypes in the future.  %
Roughly, the function does the following:
\begin{denumerate}
  \item Remove dimensions containing NaN(s).
  \item Find which dimensions are equal (within tolerance), condense into single dimensions.
  \item Find which dimensions are scanned (move beyond tolerance).
  \item For each scanned dimension, find how many unique (outside of tolerance) points were taken.
  \item Linearize each scanned dimension between smallest and largest unique point.
  \item Return scanned dimension names, column indices and points.
\end{denumerate}
The \python{from_COLORS} and \python{from_KENT} functions then linearly interpolate each row in the
channels onto the grid defined by \python{discover_dimensions}.  %
This interpolation uses \python{scipy.interpolate.griddata}, which in turn relies upon the C++
library Qhull.  %
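The following self-contained sketch demonstrates this interpolation step on toy columnar data (the
coordinates and grid here are invented for illustration; WrightTools performs the equivalent
operation internally):
\begin{codefragment}{python}
import numpy as np
from scipy.interpolate import griddata

# toy columnar data standing in for one channel of a raw file
rng = np.random.default_rng(0)
xcol = rng.uniform(0, 1, 500)
ycol = rng.uniform(0, 1, 500)
signal = np.exp(-((xcol - 0.5)**2 + (ycol - 0.5)**2) / 0.02)

# a regular grid such as discover_dimensions would return
xg, yg = np.meshgrid(np.linspace(0, 1, 51), np.linspace(0, 1, 51), indexing='ij')

# linear interpolation onto the grid (Delaunay triangulation via Qhull)
gridded = griddata((xcol, ycol), signal, (xg, yg), method='linear')
\end{codefragment}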

This strategy can be copied in the future if other non-self-describing data sources are added into
WrightTools.  %

\section{Collections}  % ==========================================================================

The WrightTools \python{Collection} class is a container class meant to organize the contents of
the wt5 file.  %
It can contain other collection instances and data objects.  %
Conceptually, it behaves like a folder in a traditional file-system.  %
\python{wt.Collection} is a child of \python{h5py.Group} \cite{h5py.Group}.

The primary attributes and methods of \python{Collection} are
\begin{ditemize}
  \item attribute \python{item_names}
  \item attribute \python{fullpath}
\end{ditemize}
% TODO: finish adding attributes and methods

Collections are useful because they allow WrightTools users to ``carry around'' several associated
data objects in the same file.  %
For example, a publication might contain several experiments on the same sample.  %
Collections allow such experiments to be organized in a hierarchical way.  %
The hierarchy of a collection's contents can be easily visualized using the
\python{print_tree} method.  %
As an example, consider the following collection instance, which contains some experiments
performed on neat carbon tetrachloride.  %
\begin{codefragment}{bash}
>>> import WrightTools as wt
>>> root = wt.open('CCl4.wt5')
>>> root.print_tree()
CCl4 (/tmp/0tze7b8a.wt5)
├── 0: delay (111,)
│   ├── axes: d1 (fs)
│   └── channels: ai0, ai1, ai2, ai3
└── 1: frequency
    ├── 0: delay_0 (51, 51)
    │   ├── axes: w2 (eV), w1=wm (eV)
    │   └── channels: ai0, ai1, ai2, ai3, ai4, mc
    └── 1: delay_200 (18, 20)
        ├── axes: w1=wm (eV), w2 (eV)
        └── channels: ai0, ai1, ai2, ai3
\end{codefragment}
Looking at the output of \python{print_tree}, we can see that this collection (named \python{CCl4})
contains the following:
\begin{denumerate}
  \item A data object ``\python{delay}'', shape \python{(111,)}.
  \item A collection object ``\python{frequency}'', containing two 2D data objects.
    \begin{denumerate}
      \item A data object ``\python{delay_0}'', shape \python{(51, 51)}.
      \item A data object ``\python{delay_200}'', shape \python{(18, 20)}.
    \end{denumerate}
\end{denumerate}
Since this is all contained in one file, a user of WrightTools can easily manage all three
associated datasets.  %
Upon simple inspection it is obvious that two of the datasets are 2D frequency-frequency scans
while one is a 1D delay slice.  %

Like \python{Channel}, \python{Data} and \python{Variable}, \python{Collection} supports adding
arbitrary metadata through the \python{attrs} dictionary.  % TODO: cite
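For example (the key and value here are hypothetical):
\begin{codefragment}{python}
>>> data.attrs['sample'] = 'CCl4'
>>> data.attrs['sample']
'CCl4'
\end{codefragment}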

\subsection{From directory}  % --------------------------------------------------------------------

The \python{wt.collection.from_directory} function can be used to automatically import all of the
data sources in an entire directory tree.  %
It returns a WrightTools collection with the same internal structure of the directory tree, but
with WrightTools data objects in the place of raw data source files.  %
Users can configure which files are routed to which from-function.  %
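A hedged sketch of the intended usage (the directory name and the pattern-to-function mapping are
illustrative assumptions, not a definitive recipe):
\begin{codefragment}{python}
>>> col = wt.collection.from_directory('experiment/', {'*.data': wt.data.from_PyCMDS})
>>> col.print_tree()
\end{codefragment}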

% TODO (also document on wright.tools)

\section{Visualizing a data object}  % ============================================================

After importing and manipulating data, one typically wants to create a plot.  %
The artists sub-package contains everything users need to plot their data objects.  %
This includes both ``quick'' artists, which generate simple plots as quickly as possible, and a
full figure layout toolkit that allows users to generate full publication quality figures.  %
It also includes ``specialty'' artists which are made to perform certain popular plotting
operations, as I will describe below.  %

Currently the artists sub-package is built on top of the wonderful matplotlib library.  %
In the future, other libraries (e.g.\ Mayavi \cite{Mayavi}) may be incorporated.  %

\subsection{Strategies for 2D visualization}  % ---------------------------------------------------

Representing two-dimensional data is an important capability for WrightTools, so some special
discussion about how such representations work is warranted.  %
WrightTools data is typically very structured, with values recorded at a grid of positions.  %
To represent two-dimensional data, then, WrightTools needs to map the values onto a color axis.  %
There are better and worse choices of colormap... % TODO: elaborate

\subsubsection{Colormap}

\begin{figure}
  \includegraphics[scale=0.5]{"processing/wright_cmap"}
  \includegraphics[scale=0.5]{"processing/cubehelix_cmap"}
  \includegraphics[scale=0.5]{"processing/viridis_cmap"}
  \includegraphics[scale=0.5]{"processing/default_cmap"}
  \caption[CAPTION TODO]{
    CAPTION TODO}
  \label{pro:fig:cmaps}
\end{figure}

\begin{figure}
  \includegraphics[width=\textwidth]{"processing/cmap_comparison"}
  \caption[CAPTION TODO]{
    CAPTION TODO}
  \label{pro:fig:cmap_comparison}
\end{figure}

\autoref{pro:fig:cmaps} shows the red, green, and blue components of four different colormaps.  %
The black line is the net intensity of each color (larger value means lighter color).  %
Below each figure is a gray-scale representation of the corresponding colormap.  %
The r, g, and b components are scaled according to human perception.  % TODO: values, from where
The traditional Wright Group colormap (derived from jet) is shown first.  % TODO: cite jet
It is not perceptual...  % TODO: define perceptual
Following are two perceptual colormaps: cubehelix, from Green,  % TODO: cite
and viridis, the new matplotlib default.  % TODO: cite
WrightTools uses the algorithm from Green to define a custom cubehelix colormap with good
perceptual properties and familiar Wright Group coloration.  %

% TODO: figure like one on wall

% TODO: mention isoluminant

\subsubsection{Interpolation type}

WrightTools data is defined at discrete points, but an entire 2D surface must be defined in order
to render a fully colored image.  %
Defining this surface requires \emph{interpolation}, and there are various strategies that have
different advantages and disadvantages.  %
Choosing the wrong type of interpolation can be misleading.  %

In the multidimensional spectroscopy community, the most popular form of interpolation is based on
Delaunay triangulation.  %

\begin{figure}
  \includegraphics[width=\textwidth]{"processing/fill_types"}
  \caption[CAPTION TODO]{
    CAPTION TODO}
  \label{pro:fig:fill_types}
\end{figure}

\subsection{Quick}  % -----------------------------------------------------------------------------

To facilitate easy visualization of data, WrightTools offers ``quick'' artist functions which
quickly generate 1D or 2D representations.  %
These functions are designed to produce good representations by default, but they do have certain
keyword arguments to make popular customizations easy.  %
These functions are particularly useful within the context of REPLs and auto-generated plots in
acquisition software.  %

Default outputs of \python{wt.artists.quick1D} and \python{wt.artists.quick2D} are shown in
\autoref{pro:fig:quick1D} and \autoref{pro:fig:quick2D}, respectively.  %
The full script used to create each image is included in the Figures.  %
Note that the actual quick functions are each one-liners, and that the supplied keyword arguments
are necessary only because the images are being saved (not typical for users in interactive
mode).  %

Perhaps the most powerful feature of \python{quick1D} and \python{quick2D} is their ability to
treat higher-dimensional datasets by automatically generating multiple figures.  %
When handing a dataset of higher dimensionality to these artists, the user may choose which axes
will be plotted against using keyword arguments.  %
Any axis not plotted against will be iterated over such that an image will be generated at each
coordinate in that axis.  %
Users may also provide a dictionary with entries of the form
\python{{axis_name: [position, units]}} to choose a single coordinate along non-plotted axes.  %
These functionalities are derived from \python{wt.Data.chop}, discussed further in...  % TODO: link
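As a hedged sketch (the axis names and the coordinate are illustrative), plotting one 2D frame of a
3D movie might look like:
\begin{codefragment}{python}
>>> wt.artists.quick2D(data, 'w1', 'w2', at={'d2': [-200, 'fs']})
\end{codefragment}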

\begin{figure}
  \includegraphics[width=0.5\textwidth]{"processing/quick1D 000"}
  \includepython{"processing/quick1D.py"}
  \caption[CAPTION TODO]{
    CAPTION TODO}
  \label{pro:fig:quick1D}
\end{figure}

\begin{figure}
  \includegraphics[width=0.5\textwidth]{"processing/quick2D 000"}
  \includepython{"processing/quick2D.py"}
  \caption[CAPTION TODO]{
    CAPTION TODO}
  \label{pro:fig:quick2D}
\end{figure}

% TODO: signed data (with and without dynamic_range=True)

\subsection{Specialty}   % ------------------------------------------------------------------------

\subsection{API}  % -------------------------------------------------------------------------------

The artists sub-package offers a thin wrapper around the default matplotlib object-oriented figure
creation API.  %
The wrapper allows WrightTools to add the following capabilities on top of matplotlib:
\begin{ditemize}
  \item More consistent multi-axes figure layout.
  \item Ability to plot data objects directly.
\end{ditemize}
Each of these is meant to lower the barrier to plotting data.  %
Without going into every detail of matplotlib figure generation capabilities, this section
introduces the unique strategy that the WrightTools wrapper takes.  %
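A minimal sketch of the pattern, assuming a 2D data object \python{data} (the layout arguments here
are illustrative assumptions, not a definitive recipe):
\begin{codefragment}{python}
import matplotlib.pyplot as plt
import WrightTools as wt

# lay out one plot axis and one colorbar axis
fig, gs = wt.artists.create_figure(width='single', cols=[1, 'cbar'])
ax = plt.subplot(gs[0, 0])
ax.pcolor(data)  # WrightTools axes accept data objects directly
wt.artists.plot_colorbar(cax=plt.subplot(gs[0, 1]))
\end{codefragment}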

\subsection{Gotchas}  % ---------------------------------------------------------------------------

% TODO: mention gotcha of apparently narrowing linewidths with wigners (how to READ colormaps)

\section{Variables and channels}  % ===============================================================

Data objects are made up of many component channels and variables, each array having the same
dimensionality as its parent data.  %
This strategy allows for maximal flexibility in data representation, but it can be overly expensive
if certain arrays do not actually change along all of the dimensions.  %
This is often especially true with variables, which typically correspond to scannable hardware that
may not have even moved across some (or any) dimensions.  %
To avoid unnecessarily large arrays, WrightTools allows Channels and Variables to have different
sizes than the parent data.  %
As an example, consider the following object.
\begin{codefragment}{bash}  % TODO: need to use bash here because of box characters :-(
>>> import WrightTools as wt; from WrightTools import datasets
>>> data = wt.data.from_COLORS(datasets.COLORS.v2p1_MoS2_TrEE_movie)
>>> data.print_tree()
MoS2 (/tmp/qhg_1b3l.wt5)
├── axes
│   ├── 0: w2 (nm) (41, 1, 1)
│   ├── 1: w1=wm (nm) (1, 41, 1)
│   └── 2: d2 (fs) (1, 1, 23)
├── variables
│   ├── 0: w2 (nm) (41, 1, 1)
│   ├── 1: w1 (nm) (1, 41, 1)
│   ├── 2: wm (nm) (1, 41, 1)
│   ├── 3: d2 (fs) (1, 1, 23)
│   ├── 4: w3 (nm) (1, 1, 1)
│   ├── 5: d0 (fs) (1, 1, 1)
│   └── 6: d1 (fs) (1, 1, 1)
└── channels
    ├── 0: ai0 (41, 41, 23)
    ├── 1: ai1 (41, 41, 23)
    ├── 2: ai2 (41, 41, 23)
    ├── 3: ai3 (41, 41, 23)
    ├── 4: ai4 (41, 41, 23)
    └── 5: mc (41, 41, 23)
\end{codefragment}
Note that this is the primary dataset discussed in \autoref{cha:mx2}.  %
The shape of this data object is \python{(41, 41, 23)}, but none of the variables have that full
shape.  %
From a quick inspection, one can see that \python{w1} and \python{wm} were scanned together, while
\python{w2} and \python{d2} were the other two dimensions.  %
\python{w3}, \python{d0}, and \python{d1} were not moved at all, yet their coordinates are still
propagated.  %

\section{Axes}  % =================================================================================

The axes have the joint shape of their component variables.  %
Although not shown in this example, channels, like variables, may also have length-1 dimensions.

Axes, variables, and channels are array-likes, so they support slicing operations.  %
In addition, all three classes have \python{points} and \python{full} attributes that return the
squeezed and broadcasted array, respectively.  %
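Using the movie object from the previous section as a sketch (shapes follow from the tree printed
above):
\begin{codefragment}{python}
>>> data.w2.shape
(41, 1, 1)
>>> data.w2.points.shape   # squeezed
(41,)
>>> data.w2.full.shape     # broadcast to the full data shape
(41, 41, 23)
\end{codefragment}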

\begin{figure}
  \includegraphics[width=\textwidth]{"processing/fringes_transform"}
  \includepython{"processing/fringes_transform.py"}
  \caption[CAPTION TODO]{
    CAPTION TODO}
\end{figure}

\section{Math}  % =================================================================================

Now that we know the basics of how the WrightTools \python{Data} class stores data, it's time to do
some data manipulation.  %
Let's start with some elementary algebra.  %

% TODO: mention chunkwise strategy

\subsection{In-place operators}  % ----------------------------------------------------------------

In Python, operators are symbols that carry out some computation.  %
Consider the following:
\begin{codefragment}{python, label=pro:lst:array_addition}
>>> import numpy as np
>>> a = np.array([4, 5, 6])
>>> b = np.array([-1, -2, -3])
>>> c = a + b
>>> c
array([3, 3, 3])
\end{codefragment}
Here, \python{a} and \python{b} are operands and \python{+} is an operator.  %
When used in this simple way, operators typically create and return a \emph{new} object in the
computer's memory.  %
We can verify this by using Python's built-in \python{id} function on the objects created in
\ref{pro:lst:array_addition}.  %
\begin{codefragment}{python}
>>> id(a), id(b), id(c)
(139712529580400, 139712333712320, 139712333713040)
\end{codefragment}
This is usually fine, but sometimes the operands are large, unwieldy objects that take a lot of
memory to store.  %
In other cases, operators are used millions of times, so that, used as above, millions of new
arrays will be created.  %

One way to avoid these problems is to use \emph{in-place} operators.  %
Using a slightly different syntax, one can tell Python to overwrite one of the operands with the
new value. %
Continuing from \ref{pro:lst:array_addition}:
\begin{codefragment}{python, label=pro:lst:in_place_addition}
>>> a += b
>>> a
array([3, 3, 3])
\end{codefragment}
No output \python{c} array was created, so no additional memory footprint is needed in
\ref{pro:lst:in_place_addition}.  %
Since WrightTools channels and variables are typically large arrays, and since these arrays are
stored on disk inside of a larger file, WrightTools requires the use of in-place operators for all
normal math.  %
Currently WrightTools supports addition (\python{+=}), multiplication (\python{*=}),
power (\python{**=}), subtraction (\python{-=}), and division (\python{/=}).  %
As an example, consider dividing a channel by a specific factor:
\begin{codefragment}{python}
>>> import WrightTools as wt; from WrightTools import datasets
>>> data = wt.data.from_JASCO(datasets.JASCO.PbSe_batch_1)
data.created at /tmp/tdyvfxu8.wt5::/
  range: 2500.0 to 700.0 (nm)
  size: 1801
>>> data.signal
<WrightTools.Channel 'signal' at /tmp/tdyvfxu8.wt5::/signal>
>>> data.signal.min(), data.signal.max()
(0.10755, 1.58144)
>>> data.signal /= 2
>>> data.signal.min(), data.signal.max()
(0.053775, 0.79072)
\end{codefragment}
Variables also support in-place operators.  %

\subsection{Clip}  % ------------------------------------------------------------------------------

Clip allows users to exclude values outside of a certain range.  %
This can be particularly useful in cases like fitting.  %
See section ... for an example.  % TODO: link to section

It is also useful when noise in a certain region of a spectrum obscures useful data; this is
particularly true for normalized and signed data.  %
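A minimal sketch (the bounds here are hypothetical), following the channel method list above:
\begin{codefragment}{python}
>>> data.signal.clip(min=0., max=1.)  # values outside [0, 1] are excluded
\end{codefragment}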

\subsection{Symmetric root}  % --------------------------------------------------------------------

Homodyne- and heterodyne-detected data need to be scaled appropriately for comparison.  %
Much of the data that we collect in the Wright Group is homodyne detected, so it goes as $N^2$.  %
To compare with the majority of other experiments, including basic linear experiments like
absorption and Raman spectroscopy, we need to plot on the ``amplitude level'', that is,
$\mathsf{amplitude=\sqrt{signal}}$.  %

Due to things like leveling, chopping, baseline subtraction, and simple noise, even
homodyne-detected data typically include negative numbers.  %
Symmetric root treats these values as cleanly as possible by applying the same relative scaling to
positive and negative values, and keeping the sign of each pixel, as the following sketch
shows.  %
\begin{codefragment}{python}
import numpy as np

def symmetric_root(value, root=2):
    # scale positive and negative values identically, keeping the sign
    return np.sign(value) * np.abs(value) ** (1 / root)
\end{codefragment}

For generality, \python{wt.Channel.symmetric_root} accepts any root as an argument.  %
The default is 2, for the common case of going from intensity scaling to amplitude scaling.  %

Any other power can be applied to a channel using the in-place \python{**=} syntax.  %

\subsection{Log}  % -------------------------------------------------------------------------------

The method \python{wt.Channel.log} applies logarithmic scaling to a channel.  %
The base of the log is settable by keyword argument, with a default of $\me$.  %
There are also methods \python{wt.Channel.log10} and \python{wt.Channel.log2}, which accept no
keyword arguments.  %
These may be slightly faster than \python{channel.log(base=10)} and
\python{channel.log(base=2)}.  %

\subsection{Level}  % -----------------------------------------------------------------------------

% TODO: figure from wright.tools

\subsection{Trim}  % ------------------------------------------------------------------------------

Trim uses statistical treatment to find and remove outliers from a dataset.  %
It is useful in cases where the naive strategy employed by \python{wt.Channel.clip} is not
sufficient, and when preparing for fitting.  %

Currently \python{trim} only supports one statistical treatment: the z-test.  %
Z-testing compares each pixel to its multidimensional neighborhood of pixels.  %
If the pixel is more than $n$ standard deviations outside of the neighborhood mean (using the
neighborhood standard deviation) it is either masked, replaced with \python{np.nan}, or replaced
with the neighborhood mean.  %
All outliers are found before any outliers are modified, so the algorithm is not directional.  %

% TODO: z-test citation

\python{wt.Channel.trim} can easily be enhanced with other statistical methods as needed.  %
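A self-contained 1D sketch of the z-test logic described above (WrightTools operates on the full
multidimensional neighborhood; the window size and threshold here are illustrative):
\begin{codefragment}{python}
import numpy as np

def ztest_outliers(arr, window=5, n=3):
    # flag points more than n neighborhood standard deviations from the
    # neighborhood mean; all outliers are found before any are modified,
    # so the algorithm is not directional
    outliers = np.zeros(arr.shape, dtype=bool)
    half = window // 2
    for i in range(arr.size):
        # neighborhood excludes the point itself
        sel = np.r_[max(i - half, 0):i, i + 1:min(i + half + 1, arr.size)]
        neighborhood = arr[sel]
        if abs(arr[i] - neighborhood.mean()) > n * neighborhood.std():
            outliers[i] = True
    return outliers
\end{codefragment}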

\subsection{Smooth}  % ----------------------------------------------------------------------------

\python{wt.Channel.smooth} essentially passes the channel through a low-pass filter.  %
It does this by convolving the channel with an n-dimensional Kaiser–Bessel window.  %

% TODO: define Kaiser window
% TODO: citations
% TODO: motivate use of Kaiser window over other choices

Smoothing is a highly destructive process, and can be very dangerous if used unthinkingly.  %
However, it can be useful when noisy data are collected at high resolution.  %
By taking many more pixels than required to capture the relevant spectral or temporal features, one
can confidently smooth collected data in post-processing to achieve clean results.  %
This strategy is similar to that employed in time-domain CMDS, where a low-pass filter is
applied to the very high resolution raw data.  %
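A 1D sketch of the underlying operation (window length and $\beta$ are illustrative choices):
\begin{codefragment}{python}
import numpy as np

x = np.linspace(0, 10, 1000)
noisy = np.sin(x) + 0.2 * np.random.randn(x.size)

window = np.kaiser(31, beta=14)
window /= window.sum()  # normalize so signal magnitude is preserved
smoothed = np.convolve(noisy, window, mode='same')
\end{codefragment}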

\section{Dimensionality manipulation}  % ==========================================================

WrightTools offers several strategies for reducing the dimensionality of a data object.  %
Also consider using the fit sub-package.  % TODO: more info, link to section

\subsection{Chop}  % ------------------------------------------------------------------------------

Chop is one of the most important methods of \python{Data}, although it is typically not called
directly by users of WrightTools.  %
Chop takes n-dimensional data and ``chops'' it into all of its lower-dimensional components.  %
Consider a 3D dataset in \python{('wm', 'w2', 'w1')}.  %
This dataset can be chopped to its component 2D \python{('wm', 'w1')} spectra.  %
\begin{codefragment}{python, label=pro:lst:chop}
>>> import WrightTools as wt; from WrightTools import datasets
>>> data = wt.data.from_PyCMDS(datasets.PyCMDS.wm_w2_w1_000)
data created at /tmp/lzyjg4au.wt5::/
  axes ('wm', 'w2', 'w1')
  shape (35, 11, 11)
>>> chopped = data.chop('wm', 'w1')  
chopped data into 11 piece(s) in ('wm', 'w1')
>>> chopped.chop000
<WrightTools.Data 'chop000' ('wm', 'w1') at /tmp/935c2v5a.wt5::/chop000>
\end{codefragment}
\python{chopped} is a collection containing 11 data objects: \python{chop000, chop001 ...
  chop010}.  %
Note that, by default, the collection is made at the root level of a new tempfile.  %
An optional keyword argument \python{parent} allows users to specify the destination for this new
collection.   %
These lower-dimensional data objects can then be used in plotting routines, fitting routines,
etc.  %

By default, chop returns \emph{all} of the lower-dimensional slices.  %
Considering the same data object from \autoref{pro:lst:chop}, we can choose to get all 1D wm
slices.  %
\begin{codefragment}{python}
>>> chopped = data.chop('wm')
chopped data into 121 piece(s) in ('wm',)
>>> chopped.chop000
<WrightTools.Data 'chop000' ('wm',) at /tmp/pqkbc0qr.wt5::/chop000>
\end{codefragment}

If desired, users may use the \python{at} keyword argument to specify a particular coordinate in
the un-retained dimensions.  %
For example, suppose that you want to plot the data from \ref{pro:lst:chop} as a wm, w1 plot at
w2 = 1580 wn.  %
\begin{codefragment}{python}
>>> chopped = data.chop('wm', 'w1', at={'w2': [1580, 'wn']})[0]
chopped data into 1 piece(s) in ('wm', 'w1')
>>> chopped
<WrightTools.Data 'chop000' ('wm', 'w1') at /tmp/_yhrdprp.wt5::/chop000>
>>> chopped.w2.points
array([1580.0])
\end{codefragment}
Note the \python{[0]} at the end of the chop call: chop always returns a collection, so we index
into it to extract the single data object.  %
This same \python{at} syntax is used by the quick artist functions described above.  %

\subsection{Collapse}  % --------------------------------------------------------------------------

\python{wt.Data.collapse} reduces the dimensionality of the data object by exactly 1 using some
mathematical operation.  %
Currently supported methods are integrate, average, sum, max, and min, with integrate as
default.  %
Collapsing a dataset is a very simple and powerful method of dimensionality reduction.  %
It allows users to inspect the net dependency along a set of axes, without being opinionated about
the coordinate in other dimensions.  %
It can also be used as a method of noise reduction.  %
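As a minimal sketch (the axis choice is arbitrary; the \python{method} keyword name is assumed from
the description above):
\begin{codefragment}{python}
>>> data.collapse('w2', method='average')
\end{codefragment}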

\subsection{Split}  % -----------------------------------------------------------------------------

\python{wt.Data.split} is not a proper method of dimensionality reduction, but it is a crucial tool
for interacting with the dimensionality of a data object.  %
\python{split} allows users to access a portion of the dataset.  %
The most common use-case is certainly in fitting operations.  %
In population spectroscopies like transient absorption and transient grating it has become typical
to take three-dimensional ``movies'' in \python{('w1', 'w2', 'd2')}, where \python{w1} is a probe,
\python{w2} is a pump, and \python{d2} is a population delay.  %
It can be informative to fit each \python{d2} trace to a model (often a single exponential), but
such a fit will not describe the signal well through zero delay and for positive \python{d2}
values (into the coherence pathways).  %
\python{data.split(d2=0.)} will return two data objects, one for the positive delays and one for
negative.  %
You can then pass the data object with only population response into your fitting routine.  %

\subsection{Join}  % ------------------------------------------------------------------------------

Like \python{split}, \python{wt.data.join} is not a method of dimensionality reduction.  %
It is also not a method of the \python{Data} class but a bare function.  %
Join accepts multiple data objects and attempts to join them together.  %
To do this, the variable and channel names must agree.  %

% TODO: join example
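A minimal sketch (the two data objects are hypothetical halves with matching variable and channel
names):
\begin{codefragment}{python}
>>> joined = wt.data.join([data_below, data_above])
\end{codefragment}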

\section{Fitting}  % ==============================================================================

Like the rest of WrightTools, the \python{fit} sub-package is made to play as nicely as possible
with high-dimensional data.  %
WrightTools uses fitting as a method of dimensionality reduction.  %
For example, consider a three-dimensional \python{('w1', 'w2', 'd2')} ``movie'', where \python{d2}
is a population delay along which the response can be well approximated by a single exponential
decay with offset.  %
Rather than attempt to visualize \python{w1, w2} at some specific value of \python{d2}, it can be
powerful to instead consider the parameters (amplitude, offset, and time constant) of an
exponential fit at each \python{w1, w2} coordinate.  %
On a more practical note, this kind of slice-by-slice dimensionality reduction via fitting can
greatly simplify automated instrumental calibration (see ...)  % TODO: link to opa chapter
WrightTools employs some simple tricks to enable these kinds of fit operations, described here.  %

% TODO: consider inserting figures that demonstrate this story (need to use wt2?)

\subsection{Function objects}  % ------------------------------------------------------------------

One challenge of slice-by-slice fitting is making a good initial guess from which to optimize.  %
It is not tractable to ask the user to provide a guess for each slice, so some kind of reasonable
automated guessing must be used.  %
WrightTools ``function'' objects are self-contained describers of a particular function.  %
As an example, consider the \python{wt.fit.Exponential} class...
It has parameters...
Fit...
Evaluate...
Guess...

Can be used directly...
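The following self-contained sketch illustrates the function-object pattern: parameters,
evaluation, automated guessing, and fitting collected in one class.  The names and details here are
assumptions for illustration, not the actual \python{wt.fit.Exponential} implementation.
\begin{codefragment}{python}
import numpy as np
from scipy.optimize import least_squares

class Exponential:
    params = ['amplitude', 'tau', 'offset']

    def evaluate(self, p, x):
        amplitude, tau, offset = p
        return amplitude * np.exp(-x / tau) + offset

    def guess(self, values, x):
        # a reasonable automated guess derived from the slice itself
        offset = values.min()
        amplitude = values.max() - offset
        tau = (x.max() - x.min()) / 2
        return [amplitude, tau, offset]

    def fit(self, values, x):
        residuals = lambda p: self.evaluate(p, x) - values
        return least_squares(residuals, self.guess(values, x)).x
\end{codefragment}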

\subsection{Fitter}  % ----------------------------------------------------------------------------

Loops through...
Returns model and outs...

\section{Construction and maintenance}  % =========================================================

\subsection{Collaborative development}  % ---------------------------------------------------------

\subsection{Version control}  % -------------------------------------------------------------------

\subsection{Unit tests}  % ------------------------------------------------------------------------

\section{Distribution and licensing} \label{pro:sec:distribution}  % ==============================

WrightTools is MIT licensed.  %

WrightTools is distributed on PyPI and conda-forge.

\section{Future directions}  % ====================================================================

Singular value decomposition.