Ch4. Experimentation with jMetal (최적화/jmetal, 2021. 11. 17. 22:58)
- In our research work, when we want to assess the performance of a multi-objective metaheuristic, we usually compare it with other algorithms over a set of benchmark problems.
- After choosing the test suites and the quality indicators to apply, we carry out a number of independent runs of each experiment and then analyze the results.
1. Configure the algorithms by setting the parameter values in an associated Settings object
2. Configure the problems to solve.
(E.g., DTLZ problems are configured by default with three objectives)
3. Execute a number of independent runs for each pair (algorithm, problem).
4. Analyze the results. jMetal can generate Latex tables and R scripts to present the results and to provide statistical information
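The four steps above can be sketched as a plain experiment loop. This is a minimal illustration, not jMetal code: the class and method names are hypothetical, and the actual execute() call of a configured Algorithm is replaced by recording the output file each run would produce.

```java
import java.util.ArrayList;
import java.util.List;

public class ExperimentLoopSketch {
    // Run every (algorithm, problem) pair independentRuns times and
    // record the per-run output file names (FUN.0, FUN.1, ...).
    public static List<String> runAll(String[] algorithms, String[] problems,
                                      int independentRuns) {
        List<String> producedFiles = new ArrayList<>();
        for (String algorithm : algorithms) {
            for (String problem : problems) {
                for (int run = 0; run < independentRuns; run++) {
                    // In jMetal, the configured Algorithm would be executed
                    // here; we only record where its front would be stored.
                    producedFiles.add(algorithm + "/" + problem + "/FUN." + run);
                }
            }
        }
        return producedFiles;
    }
}
```

The nested loops make explicit why the number of output files grows as algorithms × problems × runs, which is what the generated directory tree later reflects.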
4.1 The jmetal.experiments.Settings Class
- The motivation for designing this class has to do with the fact that, in the traditional approach, a jMetal metaheuristic is executed through a main class, such as NSGAII_main in the case of NSGA-II.
- This class contains the configuration of the algorithm, so if we want to run the metaheuristic with different parameter settings, we have to modify that file each time.
- The Settings class offers an alternative approach for defining the configuration of a metaheuristic.
- Its state is represented by a Problem object (line 9), the problem name (line 10), and a string to store the file containing the true Pareto front of the problem if quality indicators are to be applied (line 11)
- The problem can be set either when creating the object (lines 22-24) or by using the method setProblem() (lines 51-52).
- The default settings are established in the configure() method (line 31). This method must be defined in the corresponding subclasses of Settings.
- The values of the parameters can be modified by using a Java HashMap object, passing it as an argument to the second definition of the configure() method (line 36).
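The two configure() definitions can be sketched as follows. This is an illustrative mock, not the real jMetal Settings class: the field names and defaults are hypothetical, but the mechanism (public fields ending in '_', located by name via reflection and overridden from a HashMap) matches the one the text describes.

```java
import java.lang.reflect.Field;
import java.util.HashMap;

public class SettingsSketch {
    // Parameters are public and end with '_' so they can be found by name.
    public int populationSize_;
    public double crossoverProbability_;

    // First form: establish the default settings.
    public void configure() {
        populationSize_ = 100;
        crossoverProbability_ = 0.9;
    }

    // Second form: apply the defaults, then override from the map.
    public void configure(HashMap<String, String> settings) {
        configure();
        for (String key : settings.keySet()) {
            try {
                // getField() only finds *public* fields, which is why the
                // parameters must be public.
                Field field = getClass().getField(key);
                if (field.getType() == int.class) {
                    field.setInt(this, Integer.parseInt(settings.get(key)));
                } else if (field.getType() == double.class) {
                    field.setDouble(this, Double.parseDouble(settings.get(key)));
                }
            } catch (NoSuchFieldException e) {
                throw new RuntimeException("unknown parameter: " + key, e);
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
```

Keys in the map are simply the field identifiers, so `"crossoverProbability_"` overrides the corresponding public field without touching the source file.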
4.2 An example of Setting class: NSGA-II
- This is depicted in Listing 4.2, where the parameters to be set are declared in lines 9-14.
- The class constructor (lines 20-37) takes as argument the problem to be solved; it creates an instance of the problem (lines 23-29) and assigns the default parameter values (lines 30-36).
We impose the requirement that the parameters be public and that their names end with the underscore (‘_’) character; the reason has to do with the mechanism used to modify the settings, as explained below.
The implementation of the configure() method is included in Listing 4.3, where we can observe that it contains basically the same code used in the NSGAII main class to configure the algorithm.
- To modify specific parameters, we make use of a Java HashMap object. The map is composed of pairs (key, value), where the key and the value are strings. The idea is that the state variables defined in the subclass of Settings are used as keys in the properties object.
- As commented before, those variables must be public, and their identifiers must end with the underscore (‘_’) character, which marks them as configurable settings.
- The algorithm is then created according to these settings.
- After configuration, some of the algorithm's parameters can still be modified.
4.3 The jmetal.experiments.Main class
- If we take a look at this class (see Listing 4.4), we can see the three ways to run the program (lines 15-17); the only required argument is the algorithm name. This name must be the prefix of the corresponding settings class (e.g., NSGAII, GDE3, etc.).
- The second argument is the problem name (e.g., ZDT1, DTLZ3, etc.)
- the third one is the name of the file containing the Pareto front of the problem.
If the three arguments are indicated, the program calculates and displays the values of a number of quality indicators (lines 45-58), which are applied to the obtained front.
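The three invocation modes can be sketched as a simple dispatch on the argument count. This is not jmetal.experiments.Main itself, only an illustration of the behavior the text describes; the describe() helper and its messages are hypothetical.

```java
public class MainArgsSketch {
    // Dispatch on the number of command-line arguments:
    // 1: algorithm name only; 2: + problem name; 3: + Pareto front file.
    public static String describe(String[] args) {
        switch (args.length) {
            case 1:
                return "run " + args[0] + " on its default problem";
            case 2:
                return "run " + args[0] + " on " + args[1];
            case 3:
                return "run " + args[0] + " on " + args[1]
                     + " and compute quality indicators against " + args[2];
            default:
                return "usage: Main algorithmName [problemName [paretoFrontFile]]";
        }
    }
}
```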
4.4 Experimentation Example: NSGAIIStudy
- jMetal includes the jmetal.experiments.Experiment class, which is intended to help in making experimentation studies of algorithms.
- In its current state, it allows the user to indicate:
the metaheuristics to run,
the problems to solve, the quality indicators to apply,
the number of independent runs to carry out.
As a result, it generates a directory with all the obtained approximation sets and quality indicator values and, depending on the user preferences, LaTeX tables and R scripts summarizing them.
In this section, we illustrate how to use this class by detailing the code of jmetal.experiments.studies.NSGAIIStudy, a subclass of Experiment aimed at studying the effect of varying the crossover probability in NSGA-II. Concretely, we want to study four probability values: 1.0, 0.9, 0.8, and 0.7.
4.4.1 Defining the experiment
- An Experiment object is created.
- Note that an array of crossover probabilities is created.
- In this example, as we are interested in 4 configurations of NSGA-II, with 4 different crossover probabilities, we define a Java HashMap object per algorithm (line 29) to indicate the desired values (lines 35-38).
- The code between lines 40-44 is used to incorporate the names of the Pareto front files if they are specified.
- Finally, the Algorithm objects are created and configured, and they are ready to be executed (lines 46-47).
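The per-configuration maps described above can be sketched as follows: one HashMap per NSGA-II variant, each fixing one of the four crossover probabilities from the text. The method name is hypothetical; the key matches the underscore-suffixed field convention described in Section 4.2.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class CrossoverStudySketch {
    // Build one settings map per NSGA-II configuration; each map overrides
    // only the crossover probability, leaving the other defaults untouched.
    public static List<HashMap<String, String>> buildConfigurations() {
        double[] probabilities = {1.0, 0.9, 0.8, 0.7};
        List<HashMap<String, String>> settings = new ArrayList<>();
        for (double p : probabilities) {
            HashMap<String, String> map = new HashMap<>();
            map.put("crossoverProbability_", String.valueOf(p));
            settings.add(map);
        }
        return settings;
    }
}
```

Each map would then be passed to the configure(HashMap) method of the corresponding Settings object, yielding four differently configured instances of the same algorithm.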
4.4.2 Running the experiments
To run the experiments, if we are using the command line we simply have to type (assuming that the CLASSPATH variable has been configured):
Q. What exactly is the role of CLASSPATH? (It is the list of directories and JAR files where the JVM searches for compiled classes.)
After the execution of the algorithms, we obtain the directory tree depicted in Figure 4.1. The directories are:
- data: Output of the algorithms.
- latex: Latex file containing result tables.
- R: R scripts for generating statistical information. // Why scripts? Does that mean R has to be installed separately to see the statistical information?
4.4.3 Analyzing the output results
1) data : the files with the variable values (files VAR.0, VAR.1, ...) and function values (files FUN.0, FUN.1, ...) of the obtained approximation sets (we show four files instead of the 30 files)
- As the FUN.XX files store the fronts of solutions computed by the algorithms, they can be plotted to observe the resulting approximation sets. Depending on the study you are interested in, you could also join all of them into a single file to obtain a reference set
2) quality indicators of these solution sets are included in the files HV, SPREAD, EPSILON, and IGD
3) The latex directory contains a Latex file with the name of the experiment, NSGAIIStudy.tex. You just need to compile the file with your favorite Latex tool. For example, you could simply type:
...
(omitted)
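The idea of joining the FUN.XX fronts into a reference set, mentioned in step 1 above, amounts to merging the points and keeping only the nondominated ones. Here is a self-contained sketch of that filter (assuming minimization on all objectives); for testability it operates on in-memory objective vectors rather than files.

```java
import java.util.ArrayList;
import java.util.List;

public class ReferenceSetSketch {
    // Keep only the nondominated points from a merged list of fronts.
    public static List<double[]> referenceSet(List<double[]> points) {
        List<double[]> result = new ArrayList<>();
        for (double[] p : points) {
            boolean dominated = false;
            for (double[] q : points) {
                if (dominates(q, p)) {
                    dominated = true;
                    break;
                }
            }
            if (!dominated) {
                result.add(p);
            }
        }
        return result;
    }

    // a dominates b iff a is no worse in every objective and strictly
    // better in at least one (minimization assumed).
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }
}
```

In practice one would read the rows of every FUN.XX file into such a list first; the filtered result then serves as the reference front for indicators like EPSILON or IGD when the true Pareto front is unknown.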
4.5 Experimentation example: StandardStudy
In this section we describe another example of experiment. We have called it StandardStudy because it represents a kind of study we carry out frequently:
- comparing a number of different metaheuristics over the ZDT, DTLZ, and WFG benchmarks, making 100 independent runs
The algorithmSettings() method:
We test 5 metaheuristics: NSGAII, SPEA2, MOCell, SMPSO, and GDE3 (lines 24-28)
The main method is included below
- we can observe the algorithm name list (lines 42-43), the problem list (lines 44-48), and the list of the names of the files containing the Pareto fronts (lines 49-56):
The rest of the code is similar to NSGAIIStudy:
- the list of indicators is included in line 59, the directory to write the results and the one containing the Pareto fronts are specified next (lines 63-65),
- the number of independent runs is indicated in line 69,
- the experiment is initialized in line 71,
- the method to run the algorithms is invoked (lines 74-75):
- Try hand-coding this entire experimentation process once myself
- Apply it to BPP and TSP
- How to merge the two projects?
- Implement it in CloudSim and check the simulation results