Thursday, December 28, 2017

Meteo #27 - Making a Diurnal Variation Plot with GrADS

Making a diurnal variation (DV) plot with GrADS is quite easy, especially if you already know how to make a Hovmöller diagram. The two plots are very similar, except that a DV plot uses a time-time pair of axes (hour versus day) instead of the longitude-time or latitude-time pairs of a Hovmöller diagram. To make a DV plot with GrADS, one should first create a binary file with a modified X-Y grid that represents the time axes. A detailed guide to writing a GrADS gridded binary file can be read in Meteo #26 below.

1. Creating GrADS binary file

A DV plot is basically a modified time series plot. While a time series runs along the full time period, a DV plot divides the full period (e.g. 1 month) into smaller 24-hour chunks, hence the name 'diurnal' (daily) plot. Since the data source is the same, a DV plot can use the same data set as a time series; only the method of writing the data into the binary file differs.

'reinit'
'open aerosol.ctl'
* direct all display output into a binary file
'set fwrite aerosol_ts.dat'
'set gxout fwrite'
* collapse the spatial dimensions to a single point
'set x 1'
'set y 1'
'set z 1'
* cover the full time period (744 hours = 31 days)
'set t 1 744'
* write the area-averaged time series of each variable, one after another
'd tloop(aave(dustload5,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(msa,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(dms,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(pm25,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'd tloop(aave(so2,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
'disable fwrite'

The above script writes a 744-hour (1-month) time series of 5 variables into a binary file, one variable after another (all 744 values of dustload5, then all 744 values of msa, and so on). Notice that the values are stored in sequential time order, which will later be reinterpreted as the X (and Y) grids of the binary file. In other words, we turn the time dimension into a space dimension (T-grid into X-grid). This file-writing method is what differs from common time series plot data.

2. Making GrADS CTL file

The CTL file for a DV plot is basically the same as for a time series, the only difference being the dimension definition. The main trick of a DV plot is to redefine the 1-dimensional data (in this case, an X-grid with 744 grid points) as 2-dimensional data: X and Y grids with 24 and 31 grid points respectively (24 x 31 = 744).

DSET ^aerosol_ts.dat
TITLE This is experimental
UNDEF 99999.0
XDEF 24 LINEAR 1 1
YDEF 31 LINEAR 1 1
ZDEF 1 LINEAR 1 1
TDEF 1 LINEAR 00Z01MAY2017 1mo
VARS 5
dustload5 0 99 Total dustload
msa 0 99 MSA
dms 0 99 DMS
pm25 0 99 PM2.5 Aerosol
so2 0 99 Sulphur Dioxide
ENDVARS

Notice that instead of the full time period (744 hours or grid points) which was written as the X-grid in the binary file, XDEF contains 24 grid points (representing the 24 hours of a day), YDEF contains 31 grid points (representing the day/date) and TDEF contains only 1 step. This makes GrADS 'think' that the file holds 2-D (hour-by-day) data instead of 1-D (hourly) data. Save the file with any name, e.g. aerosol_ts.ctl.

3. Displaying the DV plot

Once the CTL file is saved, just open the data as usual. Since the plot uses the X and Y grids, which are normally reserved for real-world coordinates, don't forget to turn the map drawing off. We can use contours, shaded contours or filled grids to display the plot.

ga-> open aerosol_ts.ctl
ga-> set mpdraw off
ga-> set gxout grfill
ga-> d dustload5 
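
If you want the plot to carry meaningful labels instead of bare grid numbers, a few extra 'cosmetic' commands help. Here's a minimal sketch in script form (the title, label texts and label intervals are just my choices, adjust to taste):

'open aerosol_ts.ctl'
'set mpdraw off'
* hide the default GrADS logo and time stamp
'set grads off'
* label every 2 hours on X and every 5 days on Y
'set xlint 2'
'set ylint 5'
'set gxout grfill'
'd dustload5'
'draw title Diurnal Variation of Dust Load'
'draw xlab Hour'
'draw ylab Day of Month'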

This is what the result looks like (filled grid with contours):


Thursday, December 21, 2017

Meteo #26 - Saving Multi-variable Data into a GrADS-gridded Binary File with GrADS

The title might be a little confusing, but what I want to share in this post is how to save data from one format into another one with the GrADS format. For example, you open a NetCDF (nc) file containing 5 variables with GrADS, and you want to save them into a GrADS binary file for further analysis with the tool.

Again, why use GrADS for such a task when you can do it with a programming language such as FORTRAN or C? Well, because it's efficient to do the task with GrADS alone. You don't need to write a program, then compile and debug it over and over again, which saves much of your precious time. Before doing so, however, one should understand the grid order in a GrADS-gridded binary file.

A GrADS-gridded binary file is actually an ordinary binary file with a certain record order. It has no header/metadata describing its dimensions or variables to the tool or user who wants to read it. That's why you need a descriptor or control file (CTL) in order to open the data with GrADS. In other words, you need to understand how the data is ordered in the file to tell dimensions and variables apart before trying to read it.

Imagine 1-D time series data, for example hourly air temperature (T) for 9 hours. The data will have 9 records representing the times (hours), each holding the value of T at that time.

29, 30, 30, 31, 30, 29, 30, 31, 30

By looking at the data, we know that T for the 1st hour is 29, then 30 for the 2nd, and so on. That's exactly how the data is stored in a binary file. The file doesn't contain any information about the actual time (in the real world), but we know that the first record is the value of T at the 1st hour because the previous paragraph explained that the data is a 1-D time series of T for 9 hours. If the data were shown 'raw', without any description, nobody would know what kind of information it contains, since 29 or 30 could mean anything other than temperature (e.g. an age, or the number of apples on a tree, etc.).

So, the key word here is the 'description' of the data, which tells the user how to interpret its contents.

Now, what if the description says that it's not 1-D data, but rather 2-D (spatial) data of air temperature at a single time, for example 12 AM? Let's say the records are interpreted as a grid or matrix over real-world coordinates like this (remember, these are still the same records as before):

30 31 30
31 30 29
29 30 30

Then we know that the first three records in the data (29, 30 and 30) sit in the lowest row, while the first member of each group of three (29, 31 and 30) sits in the leftmost column. If we give each record an x-y coordinate, the grid looks like this:

29 (y1,x1), 30 (y1,x2), 30 (y1,x3), ... , 30 (y3,x1), 31 (y3,x2), 30 (y3,x3)

It's clear from these two examples that a binary data file is merely a sequential block of records. 1-D, 2-D or even 5-D data will always be treated sequentially by the computer. What makes them different from each other is the 'description', which explains the rules for the order of the data in the file. From the previous example, we know that even though the records are the same, they get different interpretations depending on the description of the contents. For the 1-D data, all records are interpreted as values at 9 time stamps, while for the 2-D data, the records follow a matrix structure that gives each value an x-y coordinate at a single time stamp. To put it simply, the order is like this:

1-D data ---> Time, Value
2-D data ---> Time, y-value, x-value

Back to GrADS: the tool also has definite rules for gridded binary data, and a user needs to follow those rules to make GrADS save or read data in its binary format. GrADS can save/read up to 5-D gridded data with the following order (slowest- to fastest-varying):

Ensemble, Time, Variable, z, y, x

Say you want to save 2-D (e.g. 2x2 grid) data with 2 different variables for 2 hours in GrADS-gridded binary format:

Variable 1 at hour 1:  A (y1,x1), B (y1,x2), C (y2,x1), D (y2,x2)
Variable 1 at hour 2:  E (y1,x1), F (y1,x2), G (y2,x1), H (y2,x2)

Variable 2 at hour 1: I (y1,x1), J (y1,x2), K (y2,x1), L (y2,x2)
Variable 2 at hour 2: M (y1,x1), N (y1,x2), O (y2,x1), P (y2,x2)

then you should save the data with this order: 

[Hour 1: Variable 1 : y1,x1, y1,x2, y2,x1, y2,x2], [Hour 1: Variable 2 : y1,x1, y1,x2, y2,x1, y2,x2], [Hour 2: Variable 1 : y1,x1, y1,x2, y2,x1, y2,x2], [Hour 2: Variable 2 : y1,x1, y1,x2, y2,x1, y2,x2]

As a result, the binary file contents will be ordered like this (with the values):

A, B, C, D, I, J, K, L, E, F, G, H, M, N, O, P

It might be confusing at first, but once you understand the pattern, everything makes sense and is pretty easy to follow.

Here's an example GrADS script for saving time series (1-D, 744 hours) data of 5 variables into a binary file:

'reinit'
'open aerosol.ctl'
* direct all display output into a binary file
'set fwrite aerosol_ts.dat'
'set gxout fwrite'
* collapse the spatial dimensions to a single point
'set x 1'
'set y 1'
'set z 1'
* loop over the time steps; at each step, write all 5 variables
timer=1
while(timer<=744)
 say 'writing fields to file on t: ' timer
 'set t 'timer
 'd tloop(aave(dustload5,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(msa,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(dms,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(pm25,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 'd tloop(aave(so2,lon=76.2,lon=78.2,lat=27.6,lat=29.6))'
 timer=timer+1
endwhile
'disable fwrite'

Notice that all variables (fields) need to be written to the file before moving on to the next time stamp, matching the Time, Variable order described above.

To open the binary file created by the script, you must follow the same variable order in the CTL file so GrADS understands it. If you mess up the order, GrADS can still read the file, but the results may be strange and confusing (e.g. dms may be interpreted as msa by GrADS). Here's an example CTL file to open the binary file we just made:

DSET ^aerosol_ts.dat
TITLE This is experimental
UNDEF 99999.0
XDEF 1 LINEAR 1 1
YDEF 1 LINEAR 1 1
ZDEF 1 LINEAR 1 1
TDEF 744 LINEAR 00Z01MAY2017 1hr
VARS 5
dustload5 0 99 Total dustload
msa 0 99 MSA
dms 0 99 DMS
pm25 0 99 PM2.5 Aerosol
so2 0 99 Sulphur Dioxide
ENDVARS
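
Once the CTL is saved, it's worth a quick sanity check that the file reads back as a proper time series. A minimal sketch, assuming the CTL above was saved as aerosol_ts.ctl:

'reinit'
'open aerosol_ts.ctl'
* keep the point dimensions fixed and vary time to get a line plot
'set x 1'
'set y 1'
'set z 1'
'set t 1 744'
'd dustload5'
'draw title Area-averaged Dust Load'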

If you display the result in GrADS, it may look like this (e.g. the variable dustload5, with a few 'cosmetics' for display):


  

Wednesday, December 20, 2017

Meteo #25 - Opening NetCDF Data With a GrADS Control File

NetCDF (or 'nc') is one of the most popular data formats in the Geoscience universe, so it's no wonder that many kinds of global datasets are distributed in nc format, including climate and meteorological data. While most of them can easily be opened and used by various data processing software, sometimes .. yes, sometimes s**t happens. This time, I would like to share how to open an nc file using a GrADS descriptor/control file (CTL).

Firstly, one may ask: why use GrADS to open an nc file? For more experienced people, the real question is probably: why the hell would you use a GrADS CTL to open an nc??

Here are a few reasons to use GrADS to open an nc:
  1. Built-in nc library. That means users don't need to go through the troublesome NetCDF installation just to open an nc file. All you need is to install GrADS or OpenGrADS, which is relatively much easier to do. Pre-installed nc binaries and libraries, though, are recommended for reasons which will be explained later in this post.
  2. Light, fast and free. This is an all-time classic reason. And yes, of course you can open an nc file using Matlab, ArcGIS or other sophisticated software, but they are mostly resource hogs and expensive.
  3. Efficient. Yup, you can also write a program (e.g. in FORTRAN) to open an nc file. However, using GrADS will save much of your time because you don't need to write, compile, or debug a program as you would with a programming language.
Then why use a GrADS CTL file to open an nc when you can easily open it using the GrADS 'sdfopen' or even 'xdfopen' commands?

The funny thing is, not every nc file can be opened using the above commands. You may sometimes encounter this annoying error message:

gadsdf: SDF file has no discernable X coordinate

This happens mostly because the nc file doesn't conform to the COARDS conventions (just Google them, in case you've never heard of them). A 'good' nc file normally has a header or metadata section (hence the term SDF, Self-Describing File) which contains complete information about the dataset, for example dimensions (x,y,z,t), variables etc. The GrADS 'sdfopen' and 'xdfopen' commands need to read that header in order to open an nc file, and when they find incomplete information in a 'bad' nc file, that annoying message appears. While most nc files follow the COARDS conventions, some do not, and that's exactly where a GrADS CTL file is very useful.

The catch is that the GrADS CTL file will be used as a 'replacement' header for opening the nc data, instead of the nc file's original header. While it requires more effort than just using 'sdfopen' or 'xdfopen', the user gains more control over the data by overriding the original descriptor.

Before opening an nc with a GrADS CTL, a user needs two important pieces of information about the data:
  1. Dimensions. Since nc data is gridded, the user at least needs to know the number of grid points in the x, y and z directions, as well as t (time) or e (ensemble), if needed.
  2. Variable names and their order/structure. Since gridded data uses a matrix-like sequential order to differentiate dimensions and variables, the user should know how the data is ordered in the file.
If you already know these two things, great. Otherwise, you'll need a tool to extract the nc file's dimensions and variables. One of the best-known tools for this job is ncdump, which is installed automatically when you install NetCDF on your system. That's why it's recommended to have pre-installed NetCDF binaries and libraries before you work with such data files.

OK. Assuming you don't know anything about the nc file you would like to open, here are the steps to open it using a GrADS CTL. For this example, I use an nc file which is output from a WRF-CHEM model simulation, on Linux, with NetCDF pre-installed and OpenGrADS ver 2.1.0.

1. Getting Header Information 

On the Linux shell, make a symbolic link to the nc file you'd like to open. This just makes things easier and is not obligatory, so you can skip this step if you want. For this example, I make a symbolic link named testnc to the nc file (wrfout_d01_2017-05-01_00:00:00), because the original file name is too long.

$ ln -sf wrfout_d01_2017-05-01_00:00:00 testnc

Execute ncdump to get the header of the nc file (or the link file).

$ ncdump -h testnc

Once the header is shown, scroll to the uppermost part of it. You may find something like this:

       dimensions:
        Time = UNLIMITED ; // (393 currently)
        DateStrLen = 19 ;
        west_east = 99 ;
        south_north = 109 ;
        bottom_top = 29 ;
        bottom_top_stag = 30 ;
        soil_layers_stag = 4 ;
        west_east_stag = 100 ;
        south_north_stag = 110 ;
        dust_erosion_dimension = 3 ;
        klevs_for_dust = 1 ;
        bio_emissions_dimension_stag = 41 ;
        klevs_for_dvel = 1 ;
        vprm_vgcls = 8 ;

These are the dimensions we need. From the example, we find that the time dimension is 393, the number of grid points in the x direction is 99 (west_east), in the y direction 109 (south_north) and in the z direction 29 (bottom_top). The dimension header can differ between nc files, so you should know at least something about the data before opening it. Otherwise, use common sense to guess the dimensions, e.g. x is usually related to the west-east or longitude direction, etc.

Next, scroll down the header to find information about the variables you would like to access. For this example, I want to access a variable named BC2 (Hydrophilic Black Carbon). It may look like this:

         float BC2(Time, bottom_top, south_north, west_east) ;
                BC2:FieldType = 104 ;
                BC2:MemoryOrder = "XYZ" ;
                BC2:description = "Hydrophilic Black Carbon" ;
                BC2:units = "ug/kg-dryair" ;
                BC2:stagger = "" ;
                BC2:coordinates = "XLONG XLAT XTIME" ;

Pay attention to the first line of the variable header. It shows the grid order of the variable: Time, bottom_top, south_north, west_east. Matched against the dimensions, the grid order is: t, z, y, x. You may want to note down each variable and its grid order, because you will need to put them into the GrADS CTL file later. By the way, you don't need to list all the variables in the nc file if you only want to access a few of them (e.g. 5 out of 100 variables).

2. Making GrADS CTL File

Create a new CTL file with your favorite text editor (notepad, vi, vim, gedit etc.). The contents of a CTL file for opening an nc are almost no different from a normal CTL file. For my example, it looks like this:

DSET ^testnc
TITLE This is experimental
DTYPE netcdf
UNDEF 99999.0
XDEF 99 LINEAR 1 1
YDEF 109 LINEAR 1 1
ZDEF 29 LINEAR 1 1
TDEF 393 LINEAR 00Z01MAY2017 1hr
VARS 5
DUSTLOAD_5=>dustload5 0 t,y,x Total dust loading
BC1=>bc1 29 t,z,y,x Hydrophobic Black Carbon (ug/kg-dryair)
BC2=>bc2 29 t,z,y,x Hydrophilic Black Carbon (ug/kg-dryair)
OC1=>oc1 29 t,z,y,x Hydrophobic Organic Carbon (ug/kg-dryair)
OC2=>oc2 29 t,z,y,x Hydrophilic Organic Carbon (ug/kg-dryair)
ENDVARS

DSET indicates the path of the nc file (or its link) you would like to open. TITLE is just a title; you can write anything. DTYPE indicates the file format; you should put 'netcdf' here. UNDEF indicates the undefined (missing) value for each variable; if you don't know it, just put -99.9e8 or any 'extreme' value you like.

XDEF, YDEF, ZDEF and TDEF indicate the nc file dimensions you got from ncdump, as well as the first grid point and the grid spacing in world coordinates. If you don't know the first grid point or the grid spacing of the space dimensions (XDEF, YDEF and ZDEF), just put LINEAR 1 1 after each entry. Otherwise, put the first grid coordinate and the spacing for each entry in world coordinates. For example:

XDEF 99 LINEAR 67.732000 0.311591836734694
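
As a sanity check, that spacing can be derived from the domain extents: 99 grid points spanning longitudes 67.732 to 98.268 give a spacing of (98.268 - 67.732) / (99 - 1) = 30.536 / 98 = 0.311591836734694, exactly the value above.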
   
For TDEF, you should define the first time stamp and the time interval of the data. In my example, the first record is at 00 UTC on 01MAY2017, with a 1-hour interval.

VARS indicates the number of variables you'd like to access in the data. In my example, I access only 5 of the hundreds of variables in the nc file.

The next entries are the ones which make this nc CTL different from a normal GrADS CTL. You should list the variables you want to access with this syntax:

[VAR_NAME_IN_NC]=>[VAR_NAME_IN_GrADS] [NUMBER_OF_Z_LEVELS] [GRID_ORDER] [VAR_DESCRIPTION]

For example, I wanted to access Hydrophilic Black Carbon in the nc data. First list its original name (which is BC2 in the nc file), then put the '=>' sign, followed by the variable name in GrADS, which can be any name you like (in my case, bc2). You can even keep the same name for the GrADS variable if you want.

Next, define the number of Z levels for the variable. Since BC2 is a 4-D variable (with x, y, z and t), the value should be 29, the same as the Z dimension of the data. In other cases, e.g. 3-D data (here, DUSTLOAD_5) which only has a single Z level, the value should be 0.

Finally, put the grid order as found in the ncdump result (see the first step). For example, the grid order for variable BC2 from ncdump is: Time, bottom_top, south_north, west_east. You should write it as: t,z,y,x. In the end, the entries should look like this:

BC2=>bc2 29 t,z,y,x Hydrophilic Black Carbon (ug/kg-dryair)
DUSTLOAD_5=>dustload5 0 t,y,x Total dust loading

The rest (the variable description) is free text; write anything you like to describe the variable. Don't forget to put ENDVARS at the end of the file.

Save the CTL file with any name you like, for example: wrfout1.ctl.

3. Opening The NC Data

This is the last step, and it's no different from the normal way of opening binary data with GrADS.

ga-> open wrfout1.ctl
Scanning description file:  wrfout1.ctl
Data file wrfout1 is open as file 1
LON set to 67.732 98.268
LAT set to 6.049 36.867
LEV set to 1 1
Time values set: 2017:5:1:0 2017:5:1:0
E set to 1 1
ga-> set t 100
Time values set: 2017:5:5:3 2017:5:5:3
ga-> d bc2
Contouring: 0.001 to 0.014 interval 0.001

And here's the result of my example (with x and y in real-world coordinates):



From this point on, you can do anything with the nc file. You can add as many variables as you like to the CTL file, or even save the variable data into a binary file for further use :-)
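
As an example of the latter, here's a minimal sketch that dumps the lowest model level of bc2 into a GrADS binary file, using the fwrite method from the post above (the output file name is just an example):

'reinit'
'open wrfout1.ctl'
'set fwrite bc2_lev1.dat'
'set gxout fwrite'
* full horizontal grid, lowest level, first time step
'set x 1 99'
'set y 1 109'
'set z 1'
'set t 1'
'd bc2'
'disable fwrite'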

Wednesday, September 13, 2017

Research Note #8 - Running WRF Simulation (No Chem WPS)

To run the WRF model (in this case, v3.8.1), there are basically 3 main steps: pre-processing/WPS (geogrid, ungrib, metgrid), the WRF simulation itself (real, wrf) and post-processing (UPP, grib2ctl, gribmap). This post assumes that:
  • Only a single/coarse domain is created for this simulation.
  • Meteorological simulation only; no chem WPS included.
  • The ARW core is used.
  • GrADS will be used to plot the output and UPP is used for post-processing (because ARWpost is .. ehmm .. troublesome).
  • All utilities (UPP, wgrib, and gribmap) have been installed and their paths are set.
  • The WRF history interval is 1 hour because UPP can only process output intervals >= 1 hour (for now).


WPS --> Prepare data for WRF
-----------------------------
1. GEOGRID --> Setting up the domain with geogrid
   - Edit namelist.wps
       &share   : wrf_core (core model)
                  max_dom (coarse = 1, nested > 1)
                  io_form_geogrid (output format, 2 for NC format)
       &geogrid : e_we, e_sn (grid numbers in the w-e and s-n directions)
                  dx, dy (grid resolution in meters)
                  map_proj (projection type) --> low latitude --> mercator
                  ref_lat, ref_lon (center coordinates of the domain)
                  truelat1 (true latitude) --> 30 for mercator
                  geog_data_path (path of the geo data)
   - Run geogrid.exe --> make sure the 'successful ..' message is shown.
   - Output file --> geo_em.dxx.nc in NC format (xx --> domain number id)
 
2. UNGRIB --> Extracting meteorological fields from the met data into intermediate files
   - Download the met data files and put them in a directory
   - Make a symbolic link to the Vtable --> e.g. GFS data --> ln -sf ungrib/Variable_Tables/Vtable.GFS Vtable
   - Run link_grib.csh on the met data --> ./link_grib.csh /home/data/gfs/gfs.t00z (pass part of the filename, DON'T pass only the path)
   - Edit namelist.wps
       &share   : start_date (start date of unpacking; not related to any domain, hence only the 1st column will be processed)
                  end_date (idem)
                  interval_seconds (frequency of the data in seconds: hourly --> 3600, 3-hourly --> 10800, 6-hourly --> 21600 and so on)
       &ungrib  : out_format (WPS)
   - Run ungrib.exe --> make sure the 'successful ..' message is shown.
   - Output files --> FILE:YYYY-MM-DD_HH
 
3. METGRID --> Horizontally interpolates the extracted met data onto the model domain
   - Edit namelist.wps
       &metgrid : io_form_metgrid (output format, 2 for NC format)
   - Run metgrid.exe --> make sure the 'successful ..' message is shown.
   - Output files --> met_em.dxx.YYYY-MM-DD_hh:mm:ss.nc

 
WRF --> Simulate/Run the Model
------------------------------
1. REAL --> Vertically interpolates the met_em data (metgrid's output), creates the boundary and initial condition files and does some consistency checks.
   - CD to WRFVx/run
   - Make symbolic links to the met_em files
   - Edit namelist.input
       &time_control : run_* (simulation length; overrides the end dates if shorter)
                       start_* (start time of the simulation)
                       end_* (end time of the simulation; can be overridden by run_*)
                       interval_seconds (data frequency, must be the same as in namelist.wps)
                       history_interval (frequency of writing data to the wrfout file, in minutes; e.g. 60 --> hourly)
                       frames_per_outfile (how many time periods are written to a single wrfout file; e.g. 1440 --> can hold 24 hourly records)
                       io_form* (should be 2 for NC)
       &domains      : time_step (time step for the model simulation/integration)
                       max_dom, e_we, e_sn, dx, dy, grid_id (must be the same as in namelist.wps)
                       num_metgrid_levels (number of vertical levels; data-dependent, should be 32 for GFS. Check the met_em files with ncdump -h)
   - Run real.exe --> check rsl.out.0000 for progress --> tail -f rsl.out.0000
   - For a multi-processor run --> mpirun -np x ./real.exe (x = number of processors, e.g. 4)
   - Output files --> wrfinput_dxx and wrfbdy_dxx
 
2. WRF --> Generates the model forecast
   - Run wrf.exe --> check rsl.out.0000 for progress --> tail -f rsl.out.0000
   - For a multi-processor run --> mpirun -np x ./wrf.exe (x = number of processors, e.g. 4)
   - Output files --> wrfout_dxx_[initial time], one for each domain
 
 
UPP (Unipost Post Processing) --> Convert wrfout NC file into GRIB data
-------------------------------------------------------------------------------------------
1. RUN_UNIPOST_FRAMES --> Convert a single wrfout NC file with forecast time frames into GRIB data
   - CD to the DOMAINS/postprd/ directory
   - Make a symbolic link to the wrfout data in the DOMAINS/wrfprd/ directory
   - Edit run_unipost_frames
        TOP_DIR (top directory of WRFV3 and UPP)
        DOMAINPATH (domain path)
        WRF_PATH (WRF path)
        UNIPOST_HOME (UPP path)
        POSTEXEC, SCRIPTS (path of UPP binaries and scripts)
        modelDataPath (wrfout file/symbolic link directory --> domains/wrfprd/)
        dyncore (WRF solver, should be 'ARW')
        informat (wrfout format, should be 'netcdf')
        outformat (UPP output format, should be 'grib')
        startdate (forecast initial time --> YYYYMMDDHH, should be the same as in namelist.input)
        fhr (forecast start hour, should be the same as in namelist.input)
        lastfhr (forecast end hour, should be the same as in namelist.input)
        incrementhr (history interval, should be the same as in namelist.input)
   - Run run_unipost_frames --> check the messages in case there are errors
   - Output files --> WRFPRS_dxx.hh, one for each history interval
 

GRIB2CTL --> Creates control file for the GRIB data (UPP output) to be read by GrADS
--------------------------------------------------------------------------------------
   - Run the grib2ctl.pl script --> perl grib2ctl.pl -verf WRFPRS_d01.%f2 > test.ctl (for domain 01, each forecast hour, into a file named test.ctl)
   - Output file --> test.ctl; make sure the tdef parameter in the control file matches the forecast history interval
 
 
GRIBMAP --> Creates GRIB index file to be read by GrADS
--------------------------------------------------------
   - Run gribmap --> gribmap -i test.ctl
   - Output file --> WRFPRS_dxx.00.idx (1 index file for all forecast history intervals)

Once the idx file is created, you can open the data with GrADS.
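
For a first look at the data, a minimal GrADS session could go like this. The variable names depend on what UPP wrote into the GRIB file, so list them with 'q file' first; TMPprs (temperature on pressure levels) is a typical name generated by grib2ctl.pl, but treat it as an assumption:

ga-> open test.ctl
ga-> q file
ga-> set t 2
ga-> set lev 850
ga-> d TMPprs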

--------------------------------------------------------------------------------------------------------

Some errors which could occur:

1. WRF crash

The simulation stops abruptly with messages such as the ones below:

BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = EXIT CODE: 139 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES

Possible causes:

  • The time step is too large compared with the horizontal resolution. The recommended time step is 6*dx, with dx in km. For example: dx = 10000 m (10 km), so the time step should be 6*10 = 60 s.

2. WRF freeze

The simulation stops abruptly without any message. Possible causes:
  • Memory consumption is too large. Use fewer processors and set a bigger stack size, e.g. ulimit -s 20000 or unlimited (not recommended in some cases).

Thursday, August 24, 2017

Research Note #7 - WRF Chem Installation

Finally getting face to face with this model once more. Here's a summary of its installation. I had quite a few problems while installing it, but overall it was an easy procedure.

Library Needed
--------------
1. NetCDF --> for WRF/WPS, NetCDF Fortran, version >=4.4
2. MPICH --> for multiprocessor
3. zlib --> for WPS
4. libpng --> for WPS
5. Jasper --> for WPS 

Download: http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php

Environment Variables
---------------------
1. Before Any/NetCDF Installation
   DIR=/home/ardhi/Build_WRF/LIBRARIES
   CC=gcc
   CXX=g++
   FC=gfortran
   FCFLAGS=-m64
   F77=gfortran
   FFLAGS=-m64
   
2. After NetCDF Installation
   PATH=$DIR/netcdf/bin:$PATH
   NETCDF=$DIR/netcdf
   
3. After MPICH Installation
   PATH=$DIR/mpich/bin:$PATH
   
4. Before zlib Installation
   LDFLAGS=-L$DIR/grib2/lib 
   CPPFLAGS=-I$DIR/grib2/include

Library Compatibility Tests --> http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php
----------------------------------------------------------------------------------------------------
   
Building WRF
------------
1. Without Chem
   setenv WRF_CHEM 0
   setenv J "-j 2"
   setenv LD_LIBRARY_PATH $DIR/mpich/lib64:$DIR/netcdf/lib64 (for 64bit)
   ./configure --> Serial (32)/ dmpar (34), nesting 0 or 1
   configure.wrf --> LIB_EXTERNAL --> -L/home/ardhi/Build_WRF/LIBRARIES/netcdf/lib64 -lnetcdff -lnetcdf (for 64bit)

2. With Chem
   Use the same WRF version as Chem (e.g. both V.3.8.1)
   setenv WRF_KPP 0
   setenv WRF_CHEM 1
   setenv EM_CORE 1
   setenv NMM_CORE 0
   setenv J "-j 2"
   setenv LD_LIBRARY_PATH $DIR/mpich/lib64:$DIR/netcdf/lib64 (for 64bit)
   ./configure --> Serial (32)/ dmpar (34), nesting 0 or 1 
   configure.wrf --> LIB_EXTERNAL --> -L/home/ardhi/Build_WRF/LIBRARIES/netcdf/lib64 -lnetcdff -lnetcdf (for 64bit)  

Building WPS
-----------
   setenv JASPERLIB $DIR/grib2/lib
   setenv JASPERINC $DIR/grib2/include
   ./configure --> Serial, GRIB2 (1) or dmpar, GRIB2 (3)
   configure.wps --> WRF_LIB -->  -L$(NETCDF)/lib64  -lnetcdff -lnetcdf (for 64bit)  


Some useful LINUX/UNIX commands during installation:

  • setenv (csh,tcsh), export (bash) : sets environment variables
  • setenv (csh,tcsh), env | more/less : shows environment variables
  • . ~/.bashrc (bash) : reloads the shell configuration script (applies permanent environment variables)
  • echo $0 : shows the type of current shell 
  • lscpu : shows cpu, core and thread numbers (total cpu=socket x core x thread)
  • uname -r : shows kernel release version
  • uname -m, arch : shows whether linux is 32bit or 64bit
  • lsb_release -a : shows distribution release name


Thursday, June 22, 2017

Gaijin Story #10 - Income Tax Returns and Japan's National Health Insurance


February 2017 was one of my busiest months during my stay in Japan. My first child was born that month, and I was quite overwhelmed handling the birth paperwork, from the hospital and the municipal office to the immigration office and the Indonesian Embassy (KBRI). In the end, I forgot one of the most important things that everyone in Japan, especially a head of family, must do every year: filing the income tax return at the municipal office.

The report can be filed by filling in the form that the municipal office sends to your home by post, or by coming to the office in person; the deadline is usually March 15. Filing this income tax report is mandatory for everyone, regardless of whether they have an income or not. Students who live off a scholarship, like me, count as having no income, but are still required to report. So how important is this tax report?

The income information in the report becomes the basis for calculating the various subsidies you can receive from the government. One of them is the National Health Insurance (NHI/hokensho). Those with no income (0 Yen) get a sizeable discount on the insurance premium. The report is also required to obtain the city government's subsidy for children's medical treatment.

I only realized I had forgotten to file the income tax report when I received the latest hokensho payment slips in June 2017. The previous year, I paid about 24,000 Yen in hokensho premiums for 10 months, for two people (my wife and me). So my eyes went wide when I received a new hokensho bill of 120,000 Yen for 10 months. That means 12,000 Yen per month. For a MEXT student on a barely-sufficient scholarship like me, a bill that size is strangling.

Hokensho premium slip

Income Tax Return Form (Kashiwa-shi)

Forgetting to file an income tax report like this is common in Japan, especially among foreigners, and fortunately the city government is still "kind" enough to give a second chance to those who forget. After coming to the municipal office in person and reporting my income to the tax division, I reported to the insurance division. The insurance office then adjusts the premium bill and sends new slips to your home. The other good news: there was no fine or other charge for my negligence. Alhamdulillah.

The valuable lesson: if you're a student in Japan and suddenly receive a ballooning insurance premium bill, check whether your income tax has been reported. The bottom line: don't forget to file your tax report every year (=