
GSOC 2021 Application RADIS Anand Kumar: Automatic Lineshape Engine


PROJECT SUMMARY

This part describes the GSOC project as it was presented to me, as well as my understanding of the main goal of the project, so I can keep the focus on it throughout the whole summer of code.

Radis [1] is a fast line-by-line code used to synthesize high-resolution infrared molecular spectra and a post-processing library to analyze spectral lines. It can synthesize absorption and emission spectra for multiple molecular species under both equilibrium and non-equilibrium conditions.

Radis computes every spectral line (absorption/emission) of the molecule while accounting for the effect of parameters such as temperature and pressure. Because of these parameters, we don't get a discrete line but rather a shape with a finite width.

Broadening is basically of two types: homogeneous (all molecules have identical lineshape functions) and inhomogeneous (molecules have different lineshape functions). The two dominant mechanisms are collisional broadening (Lorentzian profile, homogeneous) and Doppler broadening (Gaussian profile, inhomogeneous). The actual lineshape is the convolution of the Lorentzian and Gaussian profiles, called the Voigt profile. The calculation of lineshape broadening is the bottleneck operation of any spectroscopy code: at high temperatures the spectra may contain millions of lines, so computing the lineshape of each line is expensive and can take multiple hours. Radis uses several optimizations to reduce the broadening computation from hours to seconds or minutes.

Radis has two methods to calculate the lineshapes of the lines:

  • Legacy Method
  • DLM Method

In the Legacy method, we use the simple approach of computing the Voigt profile of every individual line and summing them over all lines. To improve performance, we use optimizations such as an efficient convolution algorithm, a reduced convolution kernel, and analytical approximations, but when the number of spectral lines reaches the millions this method still takes a long time to compute the broadening.

To overcome this drawback, Radis came up with a unique approach called the DLM (Distributed Linewidth Map) method [2]. In this method, we exploit the fact that most of the lineshapes in a spectrum are similar. We create a 2D grid whose axes are discrete Gaussian and Lorentzian widths and distribute the spectral lines over this grid, which allows the use of FFT-accelerated convolutions. We thus reduce the number of convolutions for a given number of lines. This method is the default optimization in Radis and is very efficient for a large number of lines (in the millions). But when the spectral range is wide and the number of lines is very low, the DLM method fails to produce a quick result; in fact, the Legacy method outperforms the DLM method in this scenario.

We have an optimization parameter to select the method used to calculate the spectrum. It takes 3 values -

None: The Legacy method is used to calculate the lineshape.
simple: DLM optimization is used where weights are equal to their relative position in the grid.
min-RMS: Similar to simple but weights are optimized by analytical minimization of the RMS error. (Default)

Performance-wise, the simple and min-RMS methods are quite similar, but both of them perform very differently from the None method when the number of lines is small.
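For context, the optimization parameter is simply passed to calc_spectrum() (or to a SpectrumFactory). A minimal usage sketch, with placeholder conditions and the call signature as I understand it from the RADIS documentation:

```python
from radis import calc_spectrum

# Placeholder conditions; only the `optimization` argument matters here.
s = calc_spectrum(1900, 2300,              # wavenumber range (cm-1)
                  molecule="CO",
                  isotope="1,2,3",
                  pressure=1.01325,        # bar
                  Tgas=700,                # K
                  mole_fraction=0.1,
                  databank="hitran",
                  optimization="min-RMS")  # None, "simple" or "min-RMS"
```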

Looking at the complexity of each method, the whole broadening calculation scales as:

Legacy - broadening_cutoff * spectral_range / (wstep^2) * N_lines
DLM - spectral_range / wstep * log(spectral_range / wstep)

So the ratio of these two values should be a good indicator for choosing the lineshape engine.
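As an illustration of how such a ratio could be evaluated, here is a minimal Python sketch based only on the two scalings above (function and parameter names are mine, not RADIS code):

```python
import numpy as np

def lineshape_cost_ratio(N_lines, spectral_range, wstep, broadening_max_width):
    """Rough Legacy/DLM cost ratio from the scalings above (illustration only)."""
    Nv = spectral_range / wstep                       # number of spectral grid points
    cost_legacy = broadening_max_width / wstep * Nv * N_lines
    cost_dlm = Nv * np.log(Nv)
    # simplifies to broadening_max_width / (wstep * log(Nv)) * N_lines
    return cost_legacy / cost_dlm

# Example: 20,000 lines over 2000 cm-1 at 0.01 cm-1 resolution, 10 cm-1 truncation
R = lineshape_cost_ratio(2e4, 2000, 0.01, 10)
```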

The goal of this project is to derive an equation comprising all the parameters that affect the performance of the Voigt broadening calculation, by running benchmarks over the different parameters involved in the lineshape calculation to check their significance in computation time. We then need to find the critical value (Rc) of the derived equation, which will tell us which optimization technique to select based on the R value computed in calc_spectrum().

PROJECT DESCRIPTION

In this part, I describe my current understanding of the RADIS code: what it does, what the current bottlenecks are, how the project can help improve them, and what is required to achieve it, which eventually helped me write an appropriate Timeline.

RADIS ARCHITECTURE

Radis loads data from line databases such as HITRAN (up to 700 K), HITEMP (up to 2000 K), and CDSD-4000 (up to 4000 K). As temperature increases, the number of lines also increases, so the computation becomes more expensive. The major bottleneck of the whole spectrum calculation is the line broadening calculation.
In the broadening step we essentially loop over all lines, calculate the lineshape, and return the sum of the absorption coefficient (at equilibrium) or of the absorption and emission coefficients (at non-equilibrium). The broadening calculation takes place in the lbl module, in the broadening.py file. The parameters on which the performance relies are the optimization method and, to some extent, the broadening method. The broadening method is of 3 types -

Voigt: Analytical approximation of Voigt Profile
Convolve: Integral convolution of Lorentzian and Gaussian Profile
FFT: Fast Fourier Transform, convolution is calculated in Fourier Space

Looking at each block of the equilibrium and non-equilibrium spectrum calculation shows its effect on the performance of the broadening calculation. _calc_broadening() is the bottleneck step of the whole operation. Compared with other functions such as _calc_broadening_HWHM() (calculates the broadening HWHM for all the lines) and _calculate_pseudo_continuum() (finds weak lines and adds them to a pseudo-continuum), these steps do not affect the performance of the whole broadening process much, so all parameters associated with them can be ignored and we can simply focus on the _calc_broadening() step.

The Legacy method performs well when the number of lines is small, whereas the DLM method performs very well when the number of lines is very high (in the millions). In the next section, we discuss the complexity of both methods.

LEGACY VS DLM

LEGACY

In the Legacy method (None optimization) for calculating the Voigt broadening, we compute convolutions of every line (number of lines = N_lines). Every line convolution is computed over a smaller grid called wbroad_center instead of the whole spectral grid. This grid is calculated on the basis of the broadening_max_width parameter: wbroad_center is an evenly spaced array from [-broadening_max_width/2, +broadening_max_width/2] with wstep (spectral resolution) spacing. The length of this grid scales as broadening_max_width / wstep.
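As a quick illustration, the local broadening grid can be built with NumPy (parameter values are placeholders):

```python
import numpy as np

wstep = 0.01                  # spectral resolution (cm-1)
broadening_max_width = 10     # lineshape truncation width (cm-1)

# local grid centered on each line; length ~ broadening_max_width / wstep
wbroad_center = np.arange(-broadening_max_width / 2,
                          broadening_max_width / 2 + wstep,
                          wstep)
print(len(wbroad_center))     # ~1000 points with these values
```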

In _calc_lineshape() we calculate the lineshape of each line from its HWHM. This includes calculating the Voigt broadening for each line, which is the bottleneck step of the whole process and dominates the complexity of the broadening. Two broadening methods are available here, voigt and convolve. voigt uses an analytical approximation and scales as ~broadening_max_width, whereas convolve uses numerical convolution and scales as ~broadening_max_width^2, so we stick with the voigt broadening method. The 'voigt' method uses the Whiting approximation [3], which involves calculations with a w_center matrix (N_lines x broadening_max_width / wstep), such as multiplying the matrix by itself, before calculating the lineshape. This step has to be done over the entire spectral range, which results in the complexity -

spectral_range / wstep * O(w_center)
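For reference, here is a minimal sketch of the leading terms of the Whiting approximation [3] (the small empirical correction term is omitted, and this is not the actual RADIS implementation):

```python
import numpy as np

def voigt_whiting(w, w0, wL, wG):
    """Leading terms of Whiting's (1968) empirical Voigt approximation [3].
    w, w0: wavenumbers (cm-1); wL, wG: Lorentzian / Gaussian FWHM (cm-1)."""
    wV = 0.5 * wL + np.sqrt(0.25 * wL**2 + wG**2)   # approximate Voigt FWHM
    x = (w - w0) / wV
    d = wL / wV
    # Gaussian-like and Lorentzian-like contributions, weighted by d
    return (1 - d) * np.exp(-2.772 * x**2) + d / (1 + 4 * x**2)
```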


Even though some optimizations have been applied to reduce the computation, such as lineshape truncation, i.e. the convolution is done on the wbroad_center grid instead of the full spectral grid (which also means a loss of accuracy), the Legacy method still struggles to produce quick results when the number of lines is very large. A benchmark of CO showing the number of lines vs calculation time illustrates the complexity of the Legacy method. Here is the benchmark LINK.

We can observe that as the number of lines increases, the computational time also increases, and the time taken by the Voigt broadening calculation grows steeply, whereas the rest of the functions do not contribute much to it. If we increase the number of lines to 1e6, we start to face memory issues, so any computation beyond this point is difficult. But this table clearly indicates that the Voigt broadening using the Whiting approximation is the bottleneck step of the Legacy algorithm. Thus, the complexity of the Legacy method can be written as -

O(Legacy) = O(Voigt Broadening) = spectral_range / wstep * broadening_max_width / wstep * N_lines

DLM

In DLM (optimization = simple, min-RMS), the synthetic spectrum is calculated as the integral over the product of a Voigt profile and a 3D lineshape distribution function, which is a function of spectral position and of the Gaussian and Lorentzian width coordinates. This method makes use of the fact that a spectrum can be synthesized by convolution of a stick spectrum as long as the lineshape is constant for all lines. Now, if we break down each process in _calc_lineshape_DLM() -

  • _init_w_axis() - Initializes the LDM grid based on the Gaussian and Lorentzian widths. This scales as N_lines.
  • Calculate the lineshape using the different broadening methods. If voigt, we use the Whiting approximation [3]. But as the axes of the LDM grid are short (NL ~16, NG ~4), this does not matter much; similarly, for the fft method, in which the convolutions are calculated in Fourier space, NL and NG are ignored. But if NL and NG are not that small, or the number of lines is very large (in the millions), then it is better to use a general complexity, which scales as Nv*log(Nv) (where Nv is the number of wavenumber-grid-points, Nv = spectral_range / wstep) multiplied by the size of the LDM grid (NG * NL).

So the complexity of this function scales as ~

c1 * N_lines + c2 * Nv * log(Nv) * (NG * NL)

If we break down each process in _apply_lineshape_DLM() -

  • We project all the lines on the LDM grid using np.add(), which scales as N_lines.
  • Apply the lineshapes to the lines projected on the LDM grid. This scales as Nv * log(Nv) * NG * NL.

So the complexity of this function scales as ~

c3 * N_lines + c4 * Nv * log(Nv) * (NG * NL)
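To make the two steps concrete, here is a schematic NumPy sketch of the projection step, using nearest-gridpoint ("simple"-like) weights; this is an illustration, not the actual RADIS code:

```python
import numpy as np

rng = np.random.default_rng(0)
Nv, NG, NL, N_lines = 20_000, 4, 16, 100_000

# each line has a position index on the spectral axis and a nearest (wG, wL) gridpoint
iv = rng.integers(0, Nv, size=N_lines)
ig = rng.integers(0, NG, size=N_lines)
il = rng.integers(0, NL, size=N_lines)
S = rng.random(N_lines)                   # line strengths

LDM = np.zeros((Nv, NG, NL))
np.add.at(LDM, (iv, ig, il), S)           # projection: one scatter-add per line, ~N_lines

# Applying the lineshapes then costs ~Nv*log(Nv) per (wG, wL) pair, i.e. NG*NL
# FFT-accelerated convolutions of LDM[:, g, l] with the corresponding Voigt kernel.
```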

Combining both functions, the total complexity scales as -

O(DLM) = c * N_lines + c' * Nv * log(Nv) * (NG * NL)

R FORMULA

The main goal of the project is to determine when to switch from DLM to the Legacy method depending on some specific parameters. In the previous section, we studied the complexity of both algorithms, so the ratio of the two complexities should give us a good indicator. The current assumption is that Legacy scales as
[ spectral_range / wstep * broadening_max_width / wstep * N_lines ]
and DLM scales as
[ spectral_range / wstep * log(spectral_range / wstep) ]
So R = broadening_max_width / (wstep * log(spectral_range / wstep)) * N_lines

But this R formula is derived assuming that the complexity of both the Legacy and DLM methods depends on spectral_range, wstep, broadening_max_width, and N_lines only. Things get interesting when we set up different simulated environments. For example, for the DLM method, if we check the correlation between N_lines and computational time, the computational time seems independent of the number of lines, but beyond a certain number of lines it starts increasing too. A simple benchmark of N_lines (> 1e6) vs computational time for the CH4 molecule using the HITEMP database was done LINK

As the number of lines increases beyond 1e6, we can see the computational time increasing as well. The most significant change is observed in the step where the convolution is applied, which scales as Nv * log(Nv) * NG * NL.
So clearly N_lines and the Lorentzian (NL) and Gaussian (NG) axis lengths of the LDM grid play an important role too.
Similarly, other parameters can play a significant role in determining the complexity of each method; for example, the different broadening methods (voigt and fft) can result in different computation times for different N_lines.

GOAL

In order to understand the goal of the project, let's look at a benchmark of the OH molecule (20k lines), comparing the performance of the None and simple optimization methods for a specific condition (LINK).
If we compare the performance of both methods, starting from N_lines = 1 up to a certain point (N_lines_critical) the Legacy method performs better than DLM; beyond that point the DLM method starts performing better. What we want to achieve is to find this point for every spectrum calculation, i.e. if N_lines < N_lines_critical select None, else select simple. But this assumes that the spectral calculation depends on N_lines only, which is not true. What we want is to achieve the same thing with the help of the R formula, which should include all the parameters that contribute to the complexity of both methods, so that we can get a rough estimate of this point in the form of RCRITICAL, compute the R value for every spectrum, and choose the lineshape engine accordingly.
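In pseudo-code, the selection we are aiming for is very simple once R and RCRITICAL are known (names are illustrative, not the final implementation):

```python
def choose_optimization(R, R_critical):
    """Hypothetical selector: below R_critical the Legacy engine wins,
    above it the DLM engine wins."""
    return None if R < R_critical else "min-RMS"   # None -> Legacy, "min-RMS" -> DLM
```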

APPROACH

The first goal of the project is to identify the important parameters that can significantly contribute to the complexity of both the Legacy and DLM methods. We have already discussed the complexity of both methods in depth and how it forms the R formula. Our first task will be to perform various benchmarks to assess the performance of both methods and see which parameters need to be included in the complexity of either method. In a benchmark, we will keep the temperature and pressure constant and vary the other parameters to check their influence on the complexity. As the benchmarks involve computations for different numbers of lines, a small number of lines for Legacy (1e1 to 1e5) and a large number of lines for DLM (> 1e5), we will have to use different databases: for small N_lines we can use the HITRAN databases for all supported molecules, and for large N_lines we can use the HITEMP and CDSD-4000 databases for all supported molecules. A minimal sketch of such a timing benchmark is shown after the list below.

  • For the Legacy method, we showed that the complexity is determined by the Voigt broadening using the Whiting approximation. So in most cases it will depend on the parameters that determine the complexity of the Voigt broadening (spectral_range, wstep, broadening_max_width, and N_lines). We will need to verify this by running benchmarks varying other, supposedly non-dependent parameters (molecule, isotope, spectral ranges with the same total number of points, i.e. varying wstep and spectral range together) and confirm that we get the same scaling for Legacy. If not, we will have to look deeper into the whole Legacy process and into the other parameters that may be causing the issue. We will also have to check the scaling of non-equilibrium cases. We will run similar benchmarks for CO and CO2 to check the validity of the same.
  • For the DLM method, we saw that its complexity can vary depending on the number of lines. Parameters like the axis lengths of the LDM grid can also play a crucial role in determining its complexity. So one of our first tasks will be to confirm the effect of these parameters.
    1. For N_lines > 1e5, we will measure the computational time of every part of the broadening function. We will test this for N_lines = 1e5 + n*1e4 (where n = 0, 1, 2, 3 ...) and check at what point, and in which section, the computational time increases significantly. We will repeat this step for other molecules (equilibrium/non-equilibrium) and confirm that such a point exists. Whenever N_lines > N_critical_point, we will include the parameters that may be causing it (like N_lines, NL, NG), and the scaling will be updated accordingly. Note - As these computations take a lot of memory, beyond a certain point I won't be able to run the benchmarks on my local system. We can switch to Google Colab from that point and upgrade its memory (to 32 GB), which will help us obtain even more precise benchmarks.

    2. We will explicitly check what role NL and NG play in determining the complexity of the DLM method. We will update the complexity to Nv * log(Nv) * NG * NL, run similar benchmarks to compare it against Nv * log(Nv), and check the difference. If the difference is significant, we will update the scaling formula.

    3. We will check the complexity of DLM when the number of lines is very low (10 - 1e3), see which function contributes most to the computation time, and update the complexity accordingly.

    4. We will check whether the different broadening methods ('voigt' and 'fft') have different computational times, and assess the trade-off between performance and accuracy.
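A minimal sketch of the kind of timing benchmark described above, varying the spectral range (and hence N_lines) at fixed temperature and pressure; the condition keys are taken from spectrum.conditions as I understand them and may need adjusting:

```python
from radis import calc_spectrum

results = []
for wmax in [2100, 2300, 2500, 2700]:                 # widen the range to add lines
    for optimization in [None, "min-RMS"]:            # Legacy vs DLM
        s = calc_spectrum(2000, wmax, molecule="CO", isotope="1,2,3",
                          Tgas=700, pressure=1.01325, mole_fraction=0.1,
                          databank="hitran", optimization=optimization)
        results.append({"wmax": wmax,
                        "optimization": optimization,
                        "lines": s.conditions.get("lines_calculated"),
                        "time": s.conditions.get("calculation_time")})
```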

As we need to find the critical value of R, we will need to store all spectral information for all the benchmarks we run. Luckily, we can store a spectrum in the .spec file format using store() and use a SpecDatabase to load all the spectra in a folder, which will allow us to retrieve the spectra quickly (a simple implementation can be found at the bottom of this LINK).
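A minimal sketch of this workflow (call signatures follow the RADIS documentation as I understand it and may differ slightly between versions):

```python
from radis import calc_spectrum, SpecDatabase

s = calc_spectrum(1900, 2300, molecule="CO", isotope="1", Tgas=700,
                  pressure=1.01325, mole_fraction=0.1, databank="hitran")
s.store("benchmarks/CO_700K.spec", compress=True)     # save the benchmark case

db = SpecDatabase("benchmarks/")                      # indexes every .spec in the folder
for spec in db.get():                                 # reload stored spectra later
    print(spec.conditions.get("calculation_time"))
```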

After we identify the important parameters for different conditions, we will have different R formulas based on N_lines, NL, NG, spectral_range, wstep, broadening_max_width, etc. Some parameters like NL and NG are not currently exported in the conditions, so I will update the code in my fork to add this information to spectrum.conditions so that it can be exported and loaded by the SpecDatabase.

Now we know the R formula; the next step is to calculate Rcritical. One approach is iterative: we quickly load the previous benchmark cases from the SpecDatabase, compute the R value for different parameters, and check the computational time of both Legacy and DLM to see which method performed better for which value of R. We will modify the parameters so that the computational times of both methods become equal, check the corresponding value of R, and assign it to Rc. This can be repeated several times to check the validity of Rc and adjust it. Note that this step can go hand in hand with the previous part. We can create plots of all the benchmarks for the None and simple optimizations; these benchmarks can be set up as in the following LINK. All these observations will be summed up in a Jupyter Notebook. The above tasks will be done on my fork of the develop branch and will result in the First Evaluation.
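One possible way to extract Rc from the stored benchmarks, sketched with NumPy (the arrays would come from the SpecDatabase; this is only an outline of the iterative idea):

```python
import numpy as np

def estimate_R_critical(R_values, t_legacy, t_dlm):
    """Estimate the R at which the Legacy and DLM timing curves cross."""
    order = np.argsort(R_values)
    R = np.asarray(R_values, float)[order]
    diff = np.asarray(t_legacy, float)[order] - np.asarray(t_dlm, float)[order]
    crossings = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
    if crossings.size == 0:
        return None                        # one method dominates over the whole range
    i = crossings[0]
    # linear interpolation between the two bracketing benchmark points
    return R[i] + (R[i + 1] - R[i]) * (-diff[i]) / (diff[i + 1] - diff[i])
```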

FINAL EVALUATION

By this time we will know the R formula and the R critical value. Our next task will be to add some of the benchmarks we did previously to radis-benchmark, to check the performance difference between the two methods. This will help us understand when to switch from DLM to the Legacy method for the spectral computation, depending on the parameters.

Our next task will be to add an ‘auto’ optimization mode in calc_spectrum() which will automatically select the optimum method for calculating the spectrum based on the parameters fed. All the necessary code will be added to calc.py and factory.py.
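A rough sketch of what the ‘auto’ mode could look like internally (parameter names and the placeholder threshold are mine; the real value of Rc comes from the benchmarks above):

```python
import numpy as np

R_CRITICAL = 1e4   # placeholder; to be replaced by the benchmarked value

def resolve_auto_optimization(N_lines, spectral_range, wstep,
                              broadening_max_width, optimization="auto"):
    """Hypothetical 'auto' selector used before dispatching to the lineshape engines."""
    if optimization != "auto":
        return optimization                               # user forced a method
    Nv = spectral_range / wstep
    R = broadening_max_width / wstep * N_lines / np.log(Nv)
    return None if R < R_CRITICAL else "min-RMS"          # None -> Legacy, else DLM
```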

We will set up a default broadening_max_width value for the Legacy method depending on the parameters fed (default 10 cm-1). This will follow a similar approach, i.e. benchmarking against various broadening_max_width values to find the optimal value for performance and accuracy (Note - this part can be done during the first evaluation, when we are running the benchmarks for the Legacy method). In the end, we will extensively re-run all the benchmarks to verify that the fastest method is always selected.
During this time, I will also focus on the secondary goal of the project, i.e. setting up the code architecture to deal with large spectral regions. For a molecule we can have different spectral regions, each with a different R value, so each small sub-spectrum will be computed with the appropriate optimization method in the smallest time and all the sub-spectra will then be merged, giving the best overall performance. In the next section, I discuss the Timeline of the project and how the work will be distributed throughout the summer.

TIMELINE

In this part, I broke the main project objective down into weekly tasks based on my estimate of the amount of work required to complete them. This will help me keep track of progress and adjust the timeline if some tasks end up taking less or more time than initially planned.

COMMUNITY BONDING PERIOD

Week 1 (17 May - 23 May)

In this period I will mainly focus on getting acquainted with the Radis architecture and on discussing a well-thought-out plan to proceed with the project. I will go through the original Radis paper and the DLM implementation paper, because the project objective is based on these 2 implementations. I would also like to complete the following objectives during this period -

  • Engage with the community on RADIS Slack.
  • Training on absorption & emission spectroscopy.
  • Set up a development environment and be familiar with open-source tools (GitHub / Git / tests) and the Radis architecture.

Week 2 (24 May - 30 May)

The complexity of both methods will be deeply analyzed and a benchmarking framework will be decided to check the important parameters that can contribute to determining the complexity of both methods. Also, the demo benchmarks (proof-of-work LINK) will be validated and the implementation will get refactored if needed.

Week 3 (31 May - 6 June)

Several benchmarks will be implemented for a single parameter (e.g. N_lines in DLM) and tested extensively under various simulated conditions. The bottleneck step will be identified, included in the scaling formula, and tested against non-essential parameters to check its validity. The process will be evaluated and any changes suggested by the mentors will be implemented. This will serve as the benchmarking base for the rest of the project.

Phase 1: Confirm and Adjust the Critical Number

Week 4-5 (7 June - 20 June)

Various parameters will be benchmarked for both DLM and Legacy (equilibrium and non-equilibrium conditions) and their overall complexity will be determined (depending on the conditions provided). So for a particular condition, the R formula will be updated. Code will be modified to export all necessary parameters in the spectrum object, which can be loaded later using a SpecDatabase for further testing. We will find the dependent parameters and determine the R formula. Potential ideas and improvements for getting Rcritical will be discussed with the mentors.

Week 6-7 (21 June - 4 July)

Benchmarks will be re-run and will be plotted against the R-value for both methods to compare their performance. By analyzing the plot, a reference point can be determined where Legacy outperforms DLM and will be assigned as Rcritical. For different R formulas, different Rcritical values will be determined. This process will be repeated several times to verify Rcritical.

Week 8 (5 July - 11 July)

A Jupyter Notebook will be created to discuss all the changes made to the R number formula and the critical value obtained. This will also conclude my objectives for the first evaluation. This period will also act as a buffer week to ensure that the results of all the benchmarks are satisfactory. This will involve getting feedback from the mentors and refactoring the benchmarks if needed.

Week 9 (12 July - 18 July) - 1st Evaluation Deadline.

Mentors will evaluate the progress for the 1st Milestone. I will also focus on adding selective benchmarks to radis-benchmark that will help us track the performance difference between the two methods under different simulated conditions.

Phase 2

Week 10 (19 July - 25 July)

An ‘auto’ optimization mode will be added to calc_spectrum() which will automatically select the optimum method for calculating the spectrum based on the parameters fed. All the necessary code will be added to calc.py and factory.py.

Week 11 (26 July - 1 August)

The optimum value of the broadening_max_width parameter will be determined for the Legacy method using a similar benchmarking approach (varying its value and comparing it against the computational time for the same set of parameters) and the default value will be updated. I would also like to spend this week setting up the code architecture to deal with spectra composed of multiple spectral regions, and implement an optimization selection for each sub-spectrum before merging them.

Week 12 (2 August - 8 August)

We will re-run the stored benchmarks and check that the fastest optimization method is always selected. This will involve manually determining, based on computational time, which method is better, comparing that against the choice made by the ‘auto’ optimization, and checking whether the best method was selected or not. If some cases fail, the code will be refactored to align the results and the benchmark will be run again for verification.

Week 13 (9 August - 16 August) - Final Evaluation

I would like to have this week as a buffer period, to check that all the methods and codes have been implemented properly and the results are satisfactory. Also, I will complete any pending task (secondary goal in particular). The updated code will be pushed to the develop branch of Radis. This will solve Issue #5 of Radis-benchmark. This will also conclude my objectives for the final evaluation.

FUTURE DELIVERABLES

Even after concluding my GSoC project, I plan to keep contributing to Radis. If I fail to complete all aspects of the project by the deadline, I will continue working on it after the GSoC period to get it completed, reviewed, and merged.

I would also like to work on analyzing the simple and min-RMS optimization methods and run similar benchmarks to select the optimum method when using DLM. By selecting min-RMS, the discretization error becomes smaller at the cost of more calculation time for the weights. A smaller discretization error means we can afford a coarser grid. For a small number of lines, the convolutions are the bottleneck, so a coarser grid is advantageous. For a large number of lines, we want as few calculations per line as possible, so the simple weights would be faster. I would like to verify this point and extend the automatic selection to the choice between simple and min-RMS too.

CONTRIBUTIONS

I have successfully contributed to Radis with the following Pull Requests -

  1. https://github.com/radis/radis/pull/212 [MERGED] [FIXED ISSUE #167]

    The Radis config file was handled by configparser. This Pull Request converted the config file to a JSON format, automatically converting the previous .radis file to radis.json, and refactored config.py to add/modify databank entries in radis.json. It helped me learn about the structure of Radis and how databases are linked to calculate a spectrum.

  2. https://github.com/radis/radis/pull/216 [MERGED] [FIXED DOC ISSUE]

    This Pull Request fixed images that were not rendered in the Radis documentation. It was resolved by modifying the features.rst and lbl.rst files. I learned how the Radis documentation is rendered on radis.readthedocs.io.

Also, I have opened/reported the following issues -

  1. https://github.com/radis/radis/issues/77#issuecomment-808784198 [FIXED WITH PR #216]

  2. https://github.com/radis/radis/issues/208 [Closed]

Since my project involves running various benchmarks to check the complexity and performance of the different optimization methods, I ran some benchmarks with different molecules for the None and ‘simple’ optimizations to check the bottleneck process and the performance under different simulated conditions.

Following is the link to the repo - https://github.com/anandxkumar/Radis-Benchmark-Lineshape

ABOUT ME

PERSONAL DETAILS

Name - Anand Kumar
University - National Institute Of Technology, Hamirpur, India
Major - Computer Science and Engineering
Time-Zone - IST (UTC +05:30)
Email Address - anandkumar26sep00@gmail.com
Github - https://github.com/anandxkumar
LinkedIn - anand-kumar-83896717a
Contact Number - +91 931-599-2643
Portfolio- https://anandkumar.netlify.app/
CV- anand-kumar

PLATFORM DETAILS

OS - Ubuntu 20.04, Windows 10
RAM - 8GB
Processor - i5 8th GEN
Graphics Card - Nvidia GTX 1050ti
Connectivity - Broadband (300 Mbps)
Editor - Visual Studio Code, Anaconda (Spyder, Jupyter Notebook), Google Colab

EDUCATION

I am a third-year undergraduate student at the National Institute of Technology, Hamirpur, India, currently pursuing my Bachelor of Technology in Computer Science and Engineering with a CGPA of 9.38/10. I was introduced to coding back in my school days, when I had Computer Science as an optional subject in my curriculum. I started my coding journey with Java and MySQL and later learned C/C++. In college, I developed an interest in Artificial Intelligence-related fields like Machine Learning, Deep Learning, and Data Science, and found Python to be the most suitable language for them. I have been practicing and building projects in Python for the last 3 years, so I am well versed in it.

Recently I concluded my 2-month-long Data Analytics internship at Pikkal and Co, Singapore, where I analyzed data from over 1 million podcasts to find the optimum schedule for releasing a podcast. My work can be found at the following link -

https://www.notion.so/3-Optimizing-Podcast-Cadence-e949b765feea4a4598f54fab6d6d611f

All analysis was done in Python using libraries like Pandas, NumPy, and Scikit-Learn, together with Tableau (visualization software).

I have also successfully completed a Deep Learning internship at Mavoix Solutions Private Limited, Bangalore, where I developed an Optical Character Recognition model to read blood reports.

INTEREST IN OPENASTRONOMY

I have always been an astronomy nerd: I would spend hours and hours watching astronomy-related videos. I was always curious about exoplanets and wanted to know how one can study their atmospheres. This curiosity brought me to OpenAstronomy. I found Radis and was very interested in how it works and how it helps in studying the atmospheres of exoplanets. This is why I chose Radis as my GSoC organization.

COMMITMENT

Currently, I don’t have any other commitments from May to July, and my college summer vacation coincides with the GSoC timeframe, so I will easily be able to dedicate 15+ hours every week and will happily put in more hours if needed to meet the expected deadlines. One thing to note is that my college will reopen in August. If my college opens in offline mode, I won’t be able to work in the mornings and will compensate during nights and weekends. But the current Covid scenario in India is getting worse day by day and it is unlikely that college will reopen in offline mode; in that case, I will be able to continue working as in the previous months.

REFERENCES

[1] E. Pannier, C. O. Laux, RADIS: A nonequilibrium line-by-line radiative code for CO2 and HITRAN-like database species, Journal of Quantitative Spectroscopy and Radiative Transfer, doi:10.1016/j.jqsrt.2018.09.027.

[2] D. v.d. Bekerom, E. Pannier, A discrete integral transform for rapid spectral synthesis, Journal of Quantitative Spectroscopy and Radiative Transfer, doi:10.1016/j.jqsrt.2020.107476.

[3] E. Whiting, An empirical approximation to the Voigt profile, Journal of Quantitative Spectroscopy and Radiative Transfer 8 (6) (1968) 1379-1384, ISSN 00224073, doi:10.1016/0022-4073(68)90081-2.