Module Reference
This section contains details of the data control modules that are supplied with SuperSERVER.
Module Requirements
Many of the data control modules are designed to work together. The following table lists the combinations of modules that are required for common scenarios:
| Module | Count Perturbation | Count and Continuous Perturbation | Weighted Dataset with Count Perturbation | Weighted Dataset with Count and Continuous Perturbation | Weighted Dataset |
|---|---|---|---|---|---|
| Average Cell Weight | | | Required | Required | |
| Continuous Perturbation | | Required | | Required | |
| Output Scaling | Optional | Optional | Optional | Optional | Optional |
| Perturbation | Required | Required | Required | Required | |
| Perturbed Continuous RSE | | | Required if dataset has measures | Required | |
| Perturbed Count RSE | | | Required | Required | |
| Perturbed Estimates | Required if dataset has measures | Required | Required if dataset has measures | Required | |
| Perturbed Mean Variance | | | | Required | |
| Perturbed Population Estimate Variance | | | Required if dataset has measures | Required | |
| RSE Annotation | | | Optional | Optional | Optional |
| RSE Calculation | | | Runs Automatically | Runs Automatically | Runs Automatically |
| RSE Poor Table Check | | | Optional | Optional | Optional |
| RSE Suppression | | | Optional | Optional | Optional |
| Sparsity Check | Optional | Optional | Optional | Optional | Optional |
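For instance, the table shows that a basic Count Perturbation configuration needs the Perturbation module, with Perturbed Estimates added if the dataset has measures. The following is a minimal sketch using the commands described in the next section, assuming a hypothetical method ID CountPerturbationMethod and dataset ID MyDataset; the plugin IDs, filenames, and priority values are placeholders rather than recommendations:
method CountPerturbationMethod adddcplugin Perturbation <plugin_filename> 1
method CountPerturbationMethod adddcplugin PerturbedEstimates <plugin_filename> 2
cat MyDataset addmethod CountPerturbationMethod 1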
Refer to the Example Configuration section for some examples of these scenarios.
Using Multiple Data Control Modules
Where multiple data control modules are required, they generally need to be run in a specific sequence, so that the results from one module can be used by another. To manage this, the Data Control API uses a priority system, which allows you to configure the sequence in which the modules are executed. The priority is a numeric value; modules with a lower value will be executed first.
When you need multiple data control modules to work together, you can either create a single method and add all of the required module plugins to it, or create multiple methods and apply them individually to the dataset.
- If you are using one method, set the priority of each module when you add it to the method:
method <method_id> adddcplugin <plugin_id> <plugin_filename> <priority>
- If you are using multiple methods, set the priority of each method when you apply it to the dataset:
cat <dataset_id> addmethod <method_id> <priority>
Refer to the method command and the cat command for more details.
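As an illustration of the multiple-method approach, suppose two hypothetical methods, PerturbationMethod and RSEMethod, have already been created. The priority values below are placeholders chosen only to show the ordering:
cat MyDataset addmethod PerturbationMethod 1
cat MyDataset addmethod RSEMethod 2
Because 1 is lower than 2, PerturbationMethod is executed before RSEMethod, so its results are available to the RSE modules.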
Refer to the individual module reference pages for details on their specific priority requirements.
Logging
By default, the Data Control modules log limited information about the effect of each module. To increase the amount of logging output for specific modules, do the following:
- Open the SuperSERVER logging properties file in a text editor. If you installed to the default location, this file is C:\ProgramData\STR\SuperSERVER SA\log4j.scsa.xml.
- Add the following section before the closing </log4j:configuration> tag:
<logger name="AUDIT_DataServer.Dcapi" additivity="false">
    <level value="DEBUG" />
    <appender-ref ref="MainLog" />
    <appender-ref ref="CONSOLE" />
</logger>
- For each module that you want to log additional information for, add the following property to your method:
method <method_id> <plugin_id> addproperty EnableLogCube "true"
By default the log output will be written to the SuperSERVER log directory. If you wish to change the location of the log file for an individual module, add the following property:
method <method_id> <plugin_id> addproperty LogCubePath <path>
For example:
method MyPerturbationMethod PerturbedMeanVariance addproperty EnableLogCube "true"
method MyPerturbationMethod PerturbedMeanVariance addproperty LogCubePath "C:\\dcapi\\logs"