Friday, January 19, 2018

Research Note #13 - WRF-ARW, NetCDF 3.6.3 and ARWPost Installation

One thing I have been eager to check since beginning my WRF-CHEM study is whether the output can be post-processed by ARWPost, one of the original post-processors for WRF-ARW. While ARWPost provides only limited diagnostics of the model output compared to UPP, it is definitely simpler and more straightforward. Moreover, the failure of UPP to process the chemical fields of the WRF-CHEM output really urged me to check it out.

After the first try on FX-10, I found out that the tool is outdated. The latest version was released in 2011, and I couldn't install it using the NetCDF 4.4 library. I somehow managed to install it using the older NetCDF (version 3.6.3), but the tool failed to read WRF-CHEM outputs. Nevertheless, after switching from FX-10 to O-PACS, I wanted to try installing it once again.

Before installing the tool, I tried to find out the cause of the failure. These were my hypotheses:
  • The ARWPost program uses old NetCDF functions in its source code, so the installation fails when the latest NetCDF libraries are used.
  • The ARWPost program can only read WRF-ARW output, not WRF-CHEM output, because of the difference in output fields between the two.
To test those hypotheses, I re-installed the WRF model once again, this time without the CHEM add-on (original WRF-ARW), in another directory. After the model was successfully compiled, I tried to install ARWPost. For the first try, I used the original NetCDF (version 4.4), and it failed like last time. Then I installed NetCDF 3.6.3 and used that library for the ARWPost installation. This time it worked; the compilation was successful. So ARWPost did indeed only work with the older NetCDF. Since NetCDF is supposed to be backward compatible, why ARWPost's older NetCDF calls fail to build against the newer library is still a mystery to me.
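
For reference, the build sequence looks roughly like this (a sketch; the tarball names and install paths are illustrative rather than the exact ones I used):

# Build the old NetCDF 3.6.3 (a single package with C and Fortran) into its own prefix
tar -xzvf netcdf-3.6.3.tar.gz && cd netcdf-3.6.3
./configure --prefix=$HOME/libs/netcdf-3.6.3 --disable-shared
make && make install
cd ..

# Point ARWPost at the old library, then configure and compile it
export NETCDF=$HOME/libs/netcdf-3.6.3
tar -xzvf ARWpost_V3.tar.gz && cd ARWpost
./configure        # pick the ifort/icc option
./compile          # should leave ARWpost.exe in the current directory if the link succeeds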

The next step was to find out whether ARWPost could read WRF-ARW output, WRF-CHEM output, or both. I ran WRF-ARW first and read the output with ARWPost, and it worked flawlessly. Next came the WRF-CHEM output.

!!!!!!!!!!!!!!!!
  ARWpost v3.1
!!!!!!!!!!!!!!!!

FOUND the following input files:
 ./testchem.nc

START PROCESSING DATA

   WARNING --- I do not recognize this data.
               OUTPUT FROM *             PROGRAM:WRF/CHEM V3.8.1 MODEL
               Will make an attempt to read it.


 Processing  time --- 2017-11-01_00:00:00
   Found the right date - continue
 WARNING: The following requested fields could not be found
            height
            pressure
            tk
            tc

 Processing  time --- 2017-11-01_01:00:00
   Found the right date - continue

 Processing  time --- 2017-11-01_02:00:00

   Found the right date - continue

...

Processing  time --- 2017-11-02_00:00:00
   Found the right date - continue

DONE Processing Data

CREATING .ctl file

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!  Successful completion of ARWpost  !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Surprisingly, it worked for both model outputs, and the NetCDF version difference had no effect at all. It did print a warning when reading the output, but overall the post-processing was successful. Best of all, it could read all the chemical fields in the output file. Here are some of the fields from the output CTL:

PM2_5_DRY     24  0  pm2.5 aerosol dry mass (ug m^-3)
PM10          24  0  pm10 dry mass (ug m^-3)
DMS_0          1  0  dms oceanic concentrations (nM/L)
PHOTR201      24  0  cl2 photolysis rate (min{-1})
PHOTR202      24  0  hocl photolysis rate (min{-1})
PHOTR203      24  0  fmcl photolysis rate (min{-1})
so2           24  0  SO2 mixing ratio (ppmv)
sulf          24  0  SULF mixing ratio (ppmv)
dms           24  0  DMS mixing ratio (ppmv)
msa           24  0  MSA mixing ratio (ppmv)
P25           24  0  other gocart primary pm25 (ug/kg-dryair)
BC1           24  0  Hydrophobic Black Carbon (ug/kg-dryair)
BC2           24  0  Hydrophilic Black Carbon (ug/kg-dryair)
OC1           24  0  Hydrophobic Black Carbon (ug/kg-dryair)
OC2           24  0  Hydrophilic Black Carbon (ug/kg-dryair)
DUST_1        24  0  dust size bin 1: 0.5um effective radius (ug/kg-dryair)
DUST_2        24  0  dust size bin 2: 1.4um effective radius (ug/kg-dryair)
DUST_3        24  0  dust size bin 3: 2.4um effective radius (ug/kg-dryair)
DUST_4        24  0  dust size bin 4: 4.5um effective radius (ug/kg-dryair)
DUST_5        24  0  dust size bin 5: 8.0um effective radius (ug/kg-dryair)
SEAS_1        24  0  sea-salt size bin 1: 0.3um effective radius (ug/kg-dryair)
SEAS_2        24  0  sea-salt size bin 2: 1.0um effective radius (ug/kg-dryair)
SEAS_3        24  0  sea-salt size bin 3: 3.2um effective radius (ug/kg-dryair)
SEAS_4        24  0  sea-salt size bin 4: 7.5um effective radius (ug/kg-dryair)
P10           24  0  other gocart primary pm10 (ug/kg-dryair)

That is really good news, since ARWPost can also interpolate the sigma levels of the WRF output onto pressure levels (controlled from namelist.ARWpost; see the sketch at the end of this post), which will make my job much easier in the future. Here is how the output looks when opened in GrADS:


The conclusion is that, while ARWPost only works with older NetCDF libraries (unless you modify the source code), it can read the model output as long as the output comes from the same dynamical solver (ARW). I still have no idea why it couldn't read the WRF-CHEM output on FX-10, though.
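
For future reference, the vertical interpolation is selected in namelist.ARWpost. Below is a sketch of the &interp block written from memory, so the option values and level list should be double-checked against the ARWPost README; the idea is that interp_method = 1 with a list of levels gives output on pressure coordinates.

&interp
 interp_method = 1,
 interp_levels = 1000., 950., 900., 850., 700., 500., 300., 200., 100.,
/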

    

Wednesday, January 17, 2018

Research Note #12 - Installing WRF-CHEM on Oakforest-PACS

This is basically the same as a normal installation, except for a few modifications.

1. NetCDF Libraries
Because Oakforest-PACS (O-PACS) already has the NetCDF libraries (C, Fortran and C++), we don't need to install them. We just have to load the modules into the environment.

$ module load netcdf
$ module load netcdf-fortran
$ module load netcdf-cxx

Since the WRF installer needs all of them in a single tree, it's better to copy the NetCDF files into a directory on the parallel storage, then set the path to that directory and put it in the .bash_profile file. To find the files, check the environment variables containing the entry "netcdf". Merge the directories of all three NetCDF packages into one, so we end up with the single netcdf directory the WRF installer needs.

$ env | grep netcdf

$ cp -r <original netcdf dir>/* <new dir>/
$ cp -r <original netcdf-fortran dir>/* <new dir>/
$ cp -r <original netcdf-cxx dir>/* <new dir>/

Let's say the new netcdf directory is at /work/gi55/c24223/libs/netcdf

export PACSLIB=/work/gi55/c24223/libs
export NETCDF=$PACSLIB/netcdf
export PATH=$NETCDF/bin:$PATH
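
After reloading .bash_profile, a quick sanity check (my own habit, not an official step) is to confirm that both the C and Fortran pieces actually ended up in the merged tree:

$ ls $NETCDF/lib       # expect libnetcdf.a and libnetcdff.a (exact names can vary per build)
$ ls $NETCDF/include   # expect netcdf.h, netcdf.inc and the Fortran .mod files
$ which ncdump         # should resolve to $NETCDF/bin/ncdump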

2. MPI Library
O-PACS loads Intel MPI (impi) by default, so we also don't need to install MPICH. Just point an environment variable at the directory of the impi binaries (the one containing mpicc, mpiexec, etc.) and put it in .bash_profile. The trick is basically the same as for the NetCDF libraries, but this time there is no need to copy files, because impi has a single directory for all compilers.

$ env | grep impi

export mpi=/home/opt/local/cores/intel/impi/2018.1.163/bin64
export PATH=$mpi:$PATH

Now reload .bash_profile (or re-login) to pick up the changes.
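
To confirm that the Intel MPI wrappers are the ones being picked up (again, just a quick personal check):

$ which mpicc mpiexec   # both should resolve under .../impi/2018.1.163/bin64
$ mpiexec --version     # or -V, depending on the version; prints the Intel MPI (Hydra) version string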

3. Compiling WRF
It's better to compile WRF first, before installing the additional libraries for WPS (zlib, libpng and JasPer). The main reason is that installing those WPS libraries requires setting up some environment variables (LD_LIBRARY_PATH, CPPFLAGS and LDFLAGS) which may mess up the WRF compilation.

$ export WRF_CHEM=1
$ export WRF_EM_CORE=1
$ export WRF_NMM_CORE=0
$ tar -xzvf WRFV3.8.1.TAR.gz
$ cd WRFV3/
$ tar -xzvf WRFV3-Chem-3.8.1.TAR.gz
$ ./clean -a
$ ./configure

Select the dmpar option for the Intel compilers (Xeon with AVX mods) and basic nesting, then compile em_real.

18. (serial)  19. (smpar)  20. (dmpar)  21. (dm+sm)   INTEL (ifort/icc): Xeon (SNB with AVX mods)

$ ./compile em_real >& compile.log &

Make sure the compilation is successful.

==========================================================================
build started:   Tue Jan 16 13:26:53 JST 2018
build completed: Tue Jan 16 14:11:22 JST 2018

--->                  Executables successfully built                  <---

-rwxr-x--- 1 c24223 gi55 65532338 Jan 16 14:11 main/ndown.exe
-rwxr-x--- 1 c24223 gi55 65463925 Jan 16 14:11 main/real.exe
-rwxr-x--- 1 c24223 gi55 64466197 Jan 16 14:11 main/tc.exe
-rwxr-x--- 1 c24223 gi55 75598185 Jan 16 14:10 main/wrf.exe

==========================================================================

4. Zlib, Libpng and JasPer libraries
Because these libraries will be compiled and installed using the Intel compilers, we should first set the compiler environment variables.

$ export CC=icc
$ export FC=ifort
$ export CXX=icpc
$ export F77=ifort
$ export FCFLAGS=-axMIC-AVX512
$ export FFLAGS=-axMIC-AVX512

Install all three libraries under the $PACSLIB directory.

$ tar -xzvf zlib-1.2.11.tar.gz
$ tar -xzvf libpng-1.6.34.tar.gz
$ tar -xzvf jasper-1.900.1.tar.gz

The configuration and installation steps are almost the same for each library. Before installing zlib, set the environment variables for the linker and the C preprocessor.

$ export LDFLAGS=-L$PACSLIB/grib2/lib
$ export CPPFLAGS=-I$PACSLIB/grib2/include

Then install each library with the following steps (zlib first, then libpng and JasPer):

$ ./configure --prefix=$PACSLIB/grib2 
$ make
$ make install
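
Spelled out, the sequence looks roughly like this (a sketch; I only pass --prefix and leave everything else at its defaults). Note that libpng needs the zlib installed in the step before it, which it finds through the CPPFLAGS/LDFLAGS set above:

# zlib
$ cd zlib-1.2.11 && ./configure --prefix=$PACSLIB/grib2 && make && make install && cd ..

# libpng (picks up the freshly installed zlib via CPPFLAGS/LDFLAGS)
$ cd libpng-1.6.34 && ./configure --prefix=$PACSLIB/grib2 && make && make install && cd ..

# JasPer
$ cd jasper-1.900.1 && ./configure --prefix=$PACSLIB/grib2 && make && make install && cd ..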

5. Compiling WPS
Before compiling WPS, log out and log back in. This is needed to return the environment variables (LDFLAGS and CPPFLAGS) to their original states, with the entries loaded by O-PACS.

Set environment variables for JasPer:

export JASPERLIB=$PACSLIB/grib2/lib
export JASPERINC=$PACSLIB/grib2/include

Then add the grib2 lib directory as well as the NetCDF lib directory to the existing LD_LIBRARY_PATH.

export LD_LIBRARY_PATH=/home/opt/local/cores/intel/impi/2018.1.163/intel64/lib: <many directories>:/work/gi55/c24223/libs/grib2/lib:/work/gi55/c24223/libs/netcdf/lib
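
Rather than pasting the full list by hand, the same thing can be done by appending to whatever O-PACS already set (my preference, not an official requirement):

export LD_LIBRARY_PATH=$PACSLIB/grib2/lib:$NETCDF/lib:$LD_LIBRARY_PATH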

Then configure WPS with the Intel compiler, serial option.

$ tar -xzvf WPSV3.8.1.TAR.gz
$ cd WPS
$ ./clean -a
$ ./configure
$ ./compile

Check if the compilation successfully generates geogrid.exe, ungrib.exe and metgrid.exe.

lrwxrwxrwx 1 c24223 gi55 23 Jan 16 15:08 geogrid.exe -> geogrid/src/geogrid.exe
lrwxrwxrwx 1 c24223 gi55 23 Jan 16 15:09 metgrid.exe -> metgrid/src/metgrid.exe
lrwxrwxrwx 1 c24223 gi55 21 Jan 16 15:09 ungrib.exe -> ungrib/src/ungrib.exe

As a final check, inspect the library dependencies of ungrib.exe and the other two executables.

$ ldd ungrib.exe
 linux-vdso.so.1 =>  (0x00007fff239f0000)
        libpng16.so.16 => /work/gi55/c24223/libs/grib2/lib/libpng16.so.16 (0x00007f3555e64000)
        libz.so.1 => /work/gi55/c24223/libs/grib2/lib/libz.so.1 (0x00007f3555c45000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f3555943000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f3555727000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f3555366000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f3555150000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f3554f4c000)
        libimf.so => /home/opt/local/cores/intel/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64/libimf.so (0x00007f35549be000)
        libsvml.so => /home/opt/local/cores/intel/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64/libsvml.so (0x00007f355330b000)
        libirng.so => /home/opt/local/cores/intel/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64/libirng.so (0x00007f3552f97000)
        libintlc.so.5 => /home/opt/local/cores/intel/compilers_and_libraries_2018.1.163/linux/compiler/lib/intel64/libintlc.so.5 (0x00007f3552d2a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f35560b3000)

If all the libraries are found (no 'not found' entries), then the compilation is successful.

Tuesday, January 16, 2018

Research Note #11 - Running WRF-CHEM on Oakforest-PACS

After so many compilation issues with the WRF-CHEM installation on FX-10 since the beginning of this year, I turned my attention to the Oakforest-PACS (O-PACS) supercomputer. Just reading the specs, O-PACS definitely looks much more promising than FX-10: newer processors, (many) more cores and threads, more memory, twice as many nodes as FX-10 and, best of all, Intel compilers! While they are not as widely used as GCC, I have had some experience with the Intel compilers, especially ifort, and they are definitely way better than Fujitsu's own compilers in terms of compatibility with UNIX/Linux software development.

Another unexpected thing that convinced me to switch to O-PACS was an email I received last Friday. It said (in Japanese) that the FX-10 service will be stopped after March 2018. Well, that was the final push. I'm done with FX-10.

And that's it. Early in the morning I started using O-PACS and installing WRF-CHEM, planning to run it as a parallel job on the compute nodes if the installation succeeded. Guess what? Not only did I succeed in installing the model, I also successfully ran it with MPI across multiple nodes, and all of that happened in just one day. While the installation itself was not perfectly smooth due to a library linker issue and run-time stack problems, the model ran well afterwards. I will post the installation details soon.

There are several things I would like to highlight about running WRF-CHEM on the O-PACS compute nodes:
  • The number of processes per node affects the model processing time more than the number of nodes used: fewer nodes with more processes per node can deliver the same (or better) run time than more nodes with fewer processes per node.
  • The number of nodes strongly affects the token consumption. Using more nodes with fewer processes was far more expensive than using fewer nodes with more processes.

Just take a look at the picture above. Job 1595130 (third from the bottom) was submitted with 64 nodes and 8 processes per node, while job 1595143 (the bottom-most) was submitted with (only) 16 nodes but 32 processes per node. The total number of processes was the same for both jobs: 512 (64x8 and 16x32), and the elapsed times were almost the same as well: around 10 minutes. How about the tokens consumed? Were they similar? Well ... not so much.

It's obvious that job 1595130 (more nodes, fewer processes) consumed about 4x more tokens than job 1595143 (fewer nodes, more processes). Or maybe that's because I used a different resource group? Well, we'll find out soon ...
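
For my own record, the cheaper configuration corresponds to a job script roughly like the one below. This is a sketch from memory: the resource-group name, the #PJM options, the PJM variables and the launcher invocation are assumptions that should be double-checked against the O-PACS user guide.

#!/bin/bash
#PJM -L rscgrp=regular-flat    # resource group (name is an assumption)
#PJM -L node=16                # 16 nodes ...
#PJM --mpi proc=512            # ... with 512 MPI processes in total (32 per node)
#PJM -L elapse=00:30:00
#PJM -g gi55                   # project group

module load netcdf netcdf-fortran netcdf-cxx

cd $PJM_O_WORKDIR                            # assumed variable for the submission directory
mpiexec.hydra -n ${PJM_MPI_PROC} ./wrf.exe   # assumed launcher invocation on O-PACS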

Nevertheless, starting from today, I can run the model using the full capabilities of one of the fastest supercomputers in the world. Yay!!!

Finally, I can concentrate more on the upcoming exam on Feb 7.

Time to sleep; more work tomorrow ...

Wednesday, January 10, 2018

Research Note #10 - Installing NetCDF on FX10

I got a very good lesson today. I just realized that the NetCDF package I installed for WRF on FX10 a few months ago was in fact version 4.1.3, not 4.4.1.1. The 4.1.3 package from the WRF online tutorial web page contains more source code, not only C but also Fortran and C++, whereas the 4.4.1.1 packages downloaded directly from the NetCDF website contain either the C or the Fortran code only. To install the NetCDF libraries for both C and Fortran from the latest packages, one should download and install NetCDF-C first, then NetCDF-Fortran. It was quite confusing, since the NetCDF-C package's name doesn't contain a "C" at all (it's just "netcdf").

The following are the steps to install the latest NetCDF-C (4.4.1.1) and NetCDF-Fortran (4.2) on the FX10 Oakleaf supercomputer of the University of Tokyo. The compilers used for these installations are the FX10 cross-compilers (frtpx, fccpx and FCCpx) on the login node. While some resources I found on the internet use quite complicated compiler options and optimizations, I used a very basic setup instead. So far, I have found no problems with the compilations.

1. Environment variables (bash)

$ export CC=fccpx
$ export CXX=FCCpx
$ export FC=frtpx
$ export F77=frtpx

2. Install NetCDF-C

Because this is cross-compiling, we need to add the "--host" option to the configure script. For FX10, the host name is "sparc64v-sparc-linux-gnu". For other machines it would be an x86_64 triplet instead (just run uname -m to make sure).

./configure --prefix=$INSTALL_DIR/netcdf --disable-dap --disable-netcdf-4 --disable-shared --host=sparc64v-sparc-linux-gnu
make
make install

Since this is a cross-compilation, we cannot run "make check" on the FX10 login node. To check the installed binaries, we have to run them in FX10's interactive mode.

3. Install NetCDF-fortran

This is basically the same as its C counterpart. One of the differences is in the configure options: there are no "--disable-dap" and "--disable-netcdf-4" flags for the Fortran version. One important point for this installation is that we have to set CPPFLAGS and LDFLAGS in the environment before running the configure script, otherwise the configuration will fail.

export CPPFLAGS=-I$INSTALL_DIR/netcdf/include
export LDFLAGS=-L$INSTALL_DIR/netcdf/lib

./configure --prefix=$INSTALL_DIR/netcdf --disable-shared --host=sparc64v-sparc-linux-gnu
make
make install
  
After the installation, check the contents of the netcdf directory, especially lib/ and include/, and make sure all the libraries are there. Also, don't forget to test "ncdump" in interactive mode to make sure it runs well.
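
A quick way to do that check (my own habit; the exact file names may differ slightly between versions, and somefile.nc is just a placeholder):

$ ls $INSTALL_DIR/netcdf/lib              # expect libnetcdf.a and libnetcdff.a
$ ls $INSTALL_DIR/netcdf/include          # expect netcdf.h, netcdf.inc and netcdf.mod
$ $INSTALL_DIR/netcdf/bin/ncdump -h somefile.nc   # run this in interactive mode, not on the login node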

    

Thursday, December 28, 2017

Meteo #27 - Making Diurnal Variation Plot with GrADS

Making a diurnal variation (DV) plot with GrADS is quite easy, especially if you already know how to make a Hovmoller diagram. The two plots are quite similar, except that the DV uses a time-time pair of axes (hour of day versus day) instead of the longitude-time or latitude-time pairs of the Hovmoller. To make a DV plot with GrADS, one should first create a binary file with a modified X-Y grid that represents the time axes. A detailed guide to making a GrADS gridded binary file can be read here.

1. Creating GrADS binary file

A DV plot is basically a modified time series plot. Where a time series runs along the full time period, the DV divides the full period (e.g. 1 month) into smaller 24-hour chunks, hence the name 'diurnal' or daily plot. Since the data source is the same, the DV can use the same data set as a time series, except for the way the data is written into the binary file.

'reinit'
'open aerosol.ctl'
'set fwrite aerosol_ts.dat'
'set gxout fwrite'
'set x 1'
'set y 1'
'set z 1'
'set t 1 744'
'd tloop(aave(dustload5,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(msa,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(dms,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(pm25,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(so2,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'disable fwrite'

The above script writes 744 hours (1 month) of time series data for 5 variables into a binary file. Notice that the script saves each variable as a full time sequence, which will later be read as the X-grid of the binary file. In other words, we turn the time dimension into a space dimension (T-grid into X-grid). This writing method is different from the one used for common time series plot data.

2. Making GrADS CTL file

The CTL file for a DV plot is basically the same as for a time series, with the only difference being the dimension definitions. The main trick for making a DV plot is to reinterpret the 1-dimensional data, in this case an X-grid with 744 points, as 2-dimensional data: X and Y grids with 24 and 31 points respectively.

DSET ^aerosol_ts.dat
TITLE This is experimental
UNDEF 99999.0
XDEF 24 LINEAR 1 1
YDEF 31 LINEAR 1 1
ZDEF 1 LINEAR 1 1
TDEF 1 LINEAR 00Z01MAY2017 1mo
VARS 5
dustload5 0 99 Total dustload
msa 0 99 MSA
dms 0 99 DMS
pm25 0 99 PM2.5 Aerosol
so2 0 99 Sulphur Dioxide
ENDVARS

Notice that instead of the full time period (744 hours, or grid points) that was written as the X-grid in the binary file, XDEF contains 24 points (representing the 24 hours of a day), YDEF contains 31 points (representing the day of the month) and TDEF contains only 1 step. This makes GrADS 'think' that the file holds 2-D (hour-by-day) data instead of 1-D hourly data. Save the file under any name, e.g. aerosol_ts.ctl.

3. Displaying the DV plot

Once the CTL file is saved, just open the data as usual. Since the plot uses the X and Y grids, which are normally reserved for real-world coordinates, don't forget to turn the map drawing off. We can use contour, shaded contour or filled grid to display the plot.

ga-> open aerosol_ts.ctl
ga-> set mpdraw off
ga-> set gxout grfill
ga-> d dustload5 

This is what the result looks like (filled grid with contour):


Thursday, December 21, 2017

Meteo #26 - Saving Multi-variable Data into GrADS-gridded Binary File with GrADS

The title might be a little confusing, but what I want to share in this post is how to save data from one format into the GrADS gridded binary format. For example, you open a NetCDF (nc) file with 5 variables in GrADS, and you want to save them into a GrADS binary file for further analysis with the tool.

Again, why use GrADS for such a task when you could program it in, for example, FORTRAN or C? Because it's efficient to do the whole task within GrADS: you don't need to write a program, compile it and debug it over and over again, which saves a lot of precious time. Before doing so, however, one should understand the grid ordering in a GrADS-gridded binary file.

A GrADS-gridded binary file is actually an ordinary binary file with a certain ordering. It has no header or metadata describing its dimensions or variables to the tool or user reading it. That's why you need a descriptor or control file (CTL) to open the data with GrADS. In other words, you need to understand how the data is ordered in the file, to tell dimensions and variables apart, before trying to read it.

Imagine 1-D time series data, for example hourly air temperature (T) for 9 hours. The data will have 9 records representing the hours, each with a value representing T at that time.

29, 30, 30, 31, 30, 29, 30, 31, 30

Looking at the data, we know that T for the 1st hour is 29, then 30 for the 2nd, and so on. That's exactly how the data is stored in a binary file. It doesn't contain any information about the actual (real-world) time, but we know the first record is the value of T at the 1st hour because the previous paragraph already explained that the data is a 1-D time series of T for 9 hours. If the data were just shown 'raw', without any description, nobody would know what kind of information it contains, since 29 or 30 could mean anything other than temperature (e.g. an age, or the number of apples on a tree).

So the key word here is the 'description' of the data, which tells the user how to interpret its contents.

Now, what if the description says that it's not 1-D data, but rather 2-D (spatial) data of air temperature at a single time, for example 12 AM? Let's say the records are interpreted as a grid or matrix over real-world coordinates like this (remember, these are still the same records as before):

30 31 30
31 30 29
29 30 30

Then we know that the first three records in the data (29, 30 and 30) sit in the lowest row, while the first member of each group of three (29, 31 and 30) sits in the leftmost column. If we give each record an x-y coordinate, the grid looks like this:

29 (y1,x1), 30 (y1,x2), 30 (y1,x3), ... , 30 (y3,x1), 31 (y3,x2), 30 (y3,x3)

It's clear from these two examples that a binary data file is merely a sequential block of data. 1-D, 2-D or even 5-D data will always be stored in sequential order by the computer. What distinguishes them is the 'description', which explains the 'rules' for the ordering of the data in the file. From the previous example we saw that, even though the records are the same, they are interpreted differently depending on the description of the contents. For the 1-D data, the records are interpreted as values at 9 time stamps, while for the 2-D data, the records follow a matrix structure that gives each value an x-y coordinate at a single time stamp. Put simply, the ordering is:

1-D data ---> Time, Value
2-D data ---> Time, y-value, x-value

Back to GrADS: the tool also has rules for treating gridded binary data, and a user needs to follow them to make GrADS save or read data in its binary format. GrADS can save/read up to 5-D gridded data with the following ordering (from the slowest- to the fastest-varying dimension):

Ensemble, Time, Variable, z, y, x

Say you want, for example, to save 2-D data (a 2x2 grid) with 2 different variables for 2 hours in the GrADS-gridded binary format:

Variable 1 at hour 1:  A (y1,x1), B (y1,x2), C (y2,x1), D (y2,x2)
Variable 1 at hour 2:  E (y1,x1), F (y1,x2), G (y2,x1), H (y2,x2)

Variable 2 at hour 1: I (y1,x1), J (y1,x2), K (y2,x1), L (y2,x2)
Variable 2 at hour 2: M (y1,x1), N (y1,x2), O (y2,x1), P (y2,x2)

Then you should save the data in this order:

[Hour 1: Variable 1: (y1,x1), (y1,x2), (y2,x1), (y2,x2)], [Hour 1: Variable 2: (y1,x1), (y1,x2), (y2,x1), (y2,x2)], [Hour 2: Variable 1: (y1,x1), (y1,x2), (y2,x1), (y2,x2)], [Hour 2: Variable 2: (y1,x1), (y1,x2), (y2,x1), (y2,x2)]

As a result, the binary file contents will be ordered like this (as values):

A, B, C, D, I, J, K, L, E, F, G, H, M, N, O, P

It might be confusing at the beginning, but once you understand the pattern, everything makes sense and is pretty easy to follow.

Here's an example GrADS script that saves time series data (1-D, 744 hours) of 5 variables into a binary file:

'reinit'
'open aerosol.ctl'
'set fwrite aerosol_ts.dat'
'set gxout fwrite'
'set x 1'
'set y 1'
'set z 1'
timer=1
while(timer<=744)
 say 'writing fields to file on t: ' timer
 'set t 'timer
 'd tloop(aave(dustload5,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(msa,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(dms,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(pm25,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(so2,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 timer=timer+1
endwhile
'disable fwrite'

Notice that all the variables (fields) need to be written to the file before moving on to the next time stamp.

To open the binary file created by the script, you have to follow the same variable order in the CTL file so that GrADS understands it. If you mess up the order, GrADS will still read the file, but the results may be strange and confusing (e.g. dms may be interpreted as msa). Here's an example CTL file for the binary file created above:

DSET ^aerosol_ts.dat
TITLE This is experimental
UNDEF 99999.0
XDEF 1 LINEAR 1 1
YDEF 1 LINEAR 1 1
ZDEF 1 LINEAR 1 1
TDEF 744 LINEAR 00Z01MAY2017 1hr
VARS 5
dustload5 0 99 Total dustload
msa 0 99 MSA
dms 0 99 DMS
pm25 0 99 PM2.5 Aerosol
so2 0 99 Sulphur Dioxide
ENDVARS

If you display the result in GrADS, it may look like this (e.g. the variable dustload5, with a few 'cosmetics' for display):