c3.optimizers.sensitivity
Module for sensitivity analysis. This sweeps the goal function over a given range of parameters to ascertain whether the dataset in use is sensitive to changes in the parameters of interest.
Module Contents
- class c3.optimizers.sensitivity.Sensitivity(sampling: str, batch_sizes: Dict[str, int], pmap: c3.parametermap.ParameterMap, datafiles: Dict[str, str], state_labels: Dict[str, List[Any]], sweep_map: List[List[Tuple[str]]], sweep_bounds: List[List[int]], algorithm: str, estimator: str = None, estimator_list: List[str] = None, dir_path: str = None, run_name: str = None, options={})
Bases:
c3.optimizers.modellearning.ModelLearning
Class for Sensitivity Analysis, subclassed from Model Learning
- Parameters
sampling (str) – Name of the sampling method from library
batch_sizes (Dict[str, int]) – Number of points to select from the dataset
pmap (ParameterMap) – Model parameter map
datafiles (Dict[str, str]) – The datafiles for each of the learning datasets
state_labels (Dict[str, List[Any]]) – The labels for the excited states of the system
sweep_map (List[List[Tuple[str]]]) – Map of variables to be swept, in exp_opt_map format
sweep_bounds (List[List[int]]) – List of upper and lower bounds for each sweeping variable
algorithm (str) – Name of the sweeping algorithm from the library
estimator (str, optional) – Name of estimator method from library, by default None
estimator_list (List[str], optional) – List of different estimators to be used, by default None
dir_path (str, optional) – Path to save sensitivity logs, by default None
run_name (str, optional) – Name of the experiment run, by default None
options (dict, optional) – Options for the sweeping algorithm, by default {}
- Raises
NotImplementedError – When trying to set the estimator or estimator_list
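The sweep_map and sweep_bounds arguments must stay in step: one pair of bounds per swept variable, in the same order. A minimal sketch of the expected shapes, with invented parameter identifiers (real ones come from the ParameterMap, in exp_opt_map format):

```python
# Hypothetical sweep configuration; the parameter ids below are
# illustrative placeholders, not real ParameterMap entries.
sweep_map = [
    [("Q1-anhar",)],  # first swept variable
    [("Q1-freq",)],   # second swept variable
]
# One [lower, upper] pair per swept variable, in the same order.
sweep_bounds = [
    [-380e6, -120e6],
    [4.9e9, 5.1e9],
]

assert len(sweep_map) == len(sweep_bounds)
```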
- log_setup() → None
Create the folders to store data.
- Parameters
dir_path (str) – Path of the directory in which to create the log folders
run_name (str) – User specified name for the run
- read_data(datafiles: Dict[str, str]) → None
Open data files and read in experiment results.
- Parameters
datafiles (dict) – Dictionary of paths to the files that contain the learning data.
- select_from_data(batch_size) → List[int]
Select a subset of each dataset to compute the goal function on.
- Parameters
batch_size (int) – Number of points to select
- Returns
Indices of the selected data points.
- Return type
list
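As an illustration of what batch selection amounts to (not the library's implementation; c3 chooses the sampling method by name via the sampling argument), a uniform random draw of indices could look like:

```python
import random

def select_batch(dataset_size: int, batch_size: int, seed: int = 0) -> list:
    # Pick `batch_size` distinct indices out of `dataset_size` points.
    rng = random.Random(seed)
    return rng.sample(range(dataset_size), batch_size)

indices = select_batch(dataset_size=100, batch_size=10)
```

The goal function is then evaluated only on the data points at these indices.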
- learn_model() → None
Perform the model learning by minimizing the figure of merit.
- confirm() → None
Compute the validation set, i.e. the value of the goal function on all points of the dataset that were not used for learning.
- goal_run(current_params: tensorflow.constant) → tensorflow.float64
Evaluate the figure of merit for the current model parameters.
- Parameters
current_params (tf.Tensor) – Current model parameters
- Returns
Figure of merit
- Return type
tf.float64
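The estimator that condenses model predictions and data into a single number is selected by name from the library. As a sketch of the general shape only (a root-mean-square deviation between measured and simulated populations, not c3's actual estimator):

```python
import numpy as np

def figure_of_merit(measured: np.ndarray, simulated: np.ndarray) -> float:
    # Root-mean-square deviation between measured and simulated
    # state populations; smaller means a better model fit.
    return float(np.sqrt(np.mean((measured - simulated) ** 2)))

fom = figure_of_merit(
    measured=np.array([0.10, 0.50, 0.90]),
    simulated=np.array([0.10, 0.40, 1.00]),
)
```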
- goal_run_with_grad(current_params)
Same as goal_run, but also computes the gradient. Very resource intensive and currently unoptimized.
- replace_logdir(new_logdir)
Specify a new filepath to store the log.
- Parameters
new_logdir (str) – Filepath of the new log directory.
- set_created_by(config) → None
Store the location of the config file used to create this optimizer.
- load_best(init_point, extend_bounds=False) → None
Load a previous parameter point to start the optimization from. Legacy wrapper; the method has moved to ParameterMap.
- Parameters
init_point (str) – File location of the initial point
extend_bounds (bool) – Whether or not to allow the loaded optimal parameters’ bounds to be extended if they exceed those specified.
- start_log() → None
Initialize the log with current time.
- end_log() → None
Finish the log by recording current time and total runtime.
- log_best_unitary() → None
Save the best unitary in the log.
- log_parameters(params) → None
Log the current status. Write parameters to log. Update the current best parameters. Call plotting functions as set up.
- lookup_gradient(x)
Return the stored gradient for a given parameter set.
- Parameters
x (np.array) – Parameter set.
- Returns
Value of the gradient.
- Return type
np.array
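A hedged sketch of the store-then-lookup pattern behind this method (the class and its internals below are illustrative, not c3's implementation): the gradient computed during the goal evaluation is cached under its parameter vector, then retrieved by the optimizer.

```python
import numpy as np

class GradientStore:
    """Illustrative cache: keep a gradient keyed by its parameter
    vector so a gradient-based optimizer can fetch it separately."""

    def __init__(self):
        self._grads = {}

    def remember(self, x: np.ndarray, grad: np.ndarray) -> None:
        # Key on the raw bytes of the parameter vector.
        self._grads[x.tobytes()] = grad

    def lookup_gradient(self, x: np.ndarray) -> np.ndarray:
        return self._grads[x.tobytes()]

store = GradientStore()
x = np.array([1.0, 2.0])
store.remember(x, np.array([0.5, -0.5]))
grad = store.lookup_gradient(np.array([1.0, 2.0]))
```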
- fct_to_min(input_parameters: Union[numpy.ndarray, tensorflow.constant]) → Union[numpy.ndarray, tensorflow.constant]
Wrapper for the goal function.
- Parameters
input_parameters ([np.array, tf.constant]) – Vector of parameters in the optimizer-friendly way.
- Returns
Value of the goal function. A float if the input is an np.ndarray, otherwise a tf.constant.
- Return type
[np.ndarray, tf.constant]
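A sketch of what such a wrapper does in general (the scaling function and goal below are hypothetical stand-ins; c3's actual wrapper works through the ParameterMap): translate the optimizer's scaled vector into physical parameters and return the goal value as a plain number.

```python
import numpy as np

def make_fct_to_min(goal, opt_to_phys):
    # Wrap a goal function so the optimizer can hand over a vector in
    # its own scaled units and get back a plain float.
    def fct_to_min(x: np.ndarray) -> float:
        return float(goal(opt_to_phys(x)))
    return fct_to_min

# Hypothetical scaling: the optimizer searches [-1, 1], while the
# physical parameter lives in [lo, hi].
lo, hi = 4.9e9, 5.1e9
opt_to_phys = lambda x: lo + (x + 1.0) / 2.0 * (hi - lo)
f = make_fct_to_min(goal=lambda p: abs(p - 5.0e9), opt_to_phys=opt_to_phys)
val = f(np.array(0.0))  # optimizer midpoint maps to the physical midpoint
```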
- fct_to_min_autograd(x)
Wrapper for the goal function, including evaluation and storage of the gradient.
- Parameters
x (np.array) – Vector of parameters in the optimizer-friendly way.
- Returns
Value of the goal function.
- Return type
float