SBART.Base_Models.RV_routine#

Classes

RV_routine

Base class for all the RV extraction routines.

class RV_routine#

Bases: BASE

Base class for all the RV extraction routines.

User parameters:

| Parameter name | Mandatory | Default Value | Valid Values | Comment |
|---|---|---|---|---|
| uncertainty_prop_type | False | interpolation | interpolation / propagation | How to propagate uncertainties when interpolating the stellar template |
| order_removal_mode | False | per_subInstrument | per_subInstrument / global | How to combine the bad orders of the different sub-Instruments [1] |
| sigma_outliers_tolerance | False | 6 | Integer >= 0 | Tolerance to flag pixels as outliers (when compared with the template) |
| min_block_size | False | 50 | Integer >= 0 | Reject regions with fewer than this number of consecutive valid pixels |
| output_fmt | False | [2] | [3] | Control over the outputs that SBART will write to disk [4] |
| MEMORY_SAVE_MODE | False | False | boolean | Save RAM at the expense of more disk operations |
| CONTINUUM_FIT_POLY_DEGREE | False | 1 | Integer >= 0 | Degree of the polynomial fit to the continuum |
| CONTINUUM_FIT_TYPE | False | "paper" | "paper" | How to model the continuum |

  • [1] The valid options represent:

    • per_subInstrument: each sub-Instrument is assumed to be independent; there is no guarantee that we always use the same spectral orders

    • global: we compute a global set of bad orders, which is applied to all sub-Instruments

  • [2] The default output format is: "BJD", "RVc", "RVc_ERR", "SA", "DRIFT", "DRIFT_ERR", "filename", "frameIDs"

  • [3] The valid options are:
    • BJD : Barycentric Julian Date

    • MJD : Modified Julian Date

    • RVc : RV corrected from SA and drift

    • RVc_ERR : RV uncertainty

    • OBJ : Object name

    • SA : SA correction value

    • DRIFT : Drift value

    • DRIFT_ERR : Drift uncertainty

    • full_path : Full path to S2D file

    • filename : Only the filename

    • frameIDs : Internal ID of the observation

  • [4] This User parameter is a list whose entries can be any of the options specified in [3]. The list must start with a "time-related" key (BJD/MJD), followed by RVc and RVc_ERR.

Note: Also check the User parameters of the parent classes for further customization options of SBART
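As a minimal sketch of these user parameters, the dictionary below simply mirrors the table above, and the validator is a hypothetical helper (not part of SBART) that enforces the output_fmt rule from footnote [4]:

```python
# Illustrative sketch only: the config keys mirror the documented user
# parameters; validate_output_fmt is a hypothetical helper, not SBART API.
VALID_OUTPUT_KEYS = {
    "BJD", "MJD", "RVc", "RVc_ERR", "OBJ", "SA",
    "DRIFT", "DRIFT_ERR", "full_path", "filename", "frameIDs",
}

def validate_output_fmt(fmt):
    """Enforce footnote [4]: a time-related key first, then RVc and RVc_ERR."""
    if len(fmt) < 3 or fmt[0] not in ("BJD", "MJD"):
        raise ValueError("output_fmt must start with a time-related key (BJD/MJD)")
    if fmt[1] != "RVc" or fmt[2] != "RVc_ERR":
        raise ValueError("output_fmt must continue with RVc and RVc_ERR")
    unknown = set(fmt) - VALID_OUTPUT_KEYS
    if unknown:
        raise ValueError(f"Unknown output_fmt entries: {unknown}")

RV_configs = {
    "uncertainty_prop_type": "interpolation",
    "order_removal_mode": "per_subInstrument",
    "sigma_outliers_tolerance": 6,
    "min_block_size": 50,
    # The default output format from footnote [2]
    "output_fmt": ["BJD", "RVc", "RVc_ERR", "SA", "DRIFT",
                   "DRIFT_ERR", "filename", "frameIDs"],
    "MEMORY_SAVE_MODE": False,
    "CONTINUUM_FIT_POLY_DEGREE": 1,
    "CONTINUUM_FIT_TYPE": "paper",
}

validate_output_fmt(RV_configs["output_fmt"])  # the default format passes
```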

sampler_name = ''#
__init__(N_jobs, RV_configs, sampler, target, valid_samplers, extra_folders_needed=None)#
load_previous_RVoutputs()#
find_subInstruments_to_use(dataClass, check_metadata)#

Check which subInstruments should be used. By default, only compare the previous MetaData (if it exists) with the current one.

TODO: also check for the validity of the stellar template in here!

Parameters:
  • dataClass ([type]) – [description]

  • check_metadata (bool) – [description]

Raises:

NoDataError – If all sub-Instruments were rejected

Return type:

None

run_routine(dataClass, storage_path, orders_to_skip=(), store_data=True, check_metadata=False, store_cube_to_disk=True)#

Trigger the RV extraction for all sub-Instruments

Parameters:
  • check_metadata (bool) – If True, the TM checks the Metadata already stored on disk (if it is the same: does nothing). By default False

  • store_data (bool) – If True, saves the data to disk. By default True

  • storage_path (Union[pathlib.Path, str]) – Path in which the outputs of the run will be stored

  • dataClass (DataClass) – [description]

  • orders_to_skip (Union[list,tuple,str,dict], optional) – Orders to skip in the RV calculation for each subInstrument. If list/tuple, remove the same orders from all subInstruments. If dict, the keys should be subInstruments and the values lists of orders to skip (if a key does not exist, assume there are none to skip). If str, load a previous RV cube from disk and use the orders that that run used. By default ()

Return type:

None
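The accepted orders_to_skip forms can be sketched as follows. This is a hypothetical helper illustrating the semantics described above, not SBART's actual implementation; the str form (loading a previous RV cube from disk) is omitted because it needs SBART's storage layer:

```python
# Hypothetical helper sketching the orders_to_skip semantics; not SBART code.
def normalize_orders_to_skip(orders_to_skip, subInstruments):
    """Return {subInstrument: set(orders)} from the accepted input forms."""
    if isinstance(orders_to_skip, (list, tuple)):
        # list/tuple: remove the same orders from every sub-Instrument
        return {sub: set(orders_to_skip) for sub in subInstruments}
    if isinstance(orders_to_skip, dict):
        # dict: a missing key means nothing extra to skip for that sub-Instrument
        return {sub: set(orders_to_skip.get(sub, ())) for sub in subInstruments}
    # str inputs (orders from a previous RV cube on disk) are omitted here
    raise NotImplementedError(f"Unsupported type: {type(orders_to_skip)}")

subInsts = ["ESPRESSO18", "ESPRESSO19"]  # illustrative sub-Instrument names
skip = normalize_orders_to_skip((0, 1, 2), subInsts)
# every sub-Instrument now maps to {0, 1, 2}
```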

apply_routine_to_subInst(dataClass, subInst)#
Return type:

RV_cube

create_extra_plots(cube)#
Return type:

NoReturn

process_workers_output(empty_cube, worker_outputs)#
Return type:

RV_cube

build_target_configs()#

Create a dict with extra information to be passed to the target functions as kwargs

Returns:

[description]

Return type:

dict

generate_worker_configs(dataClassProxy)#

Generate the dictionary that will be passed to the workers when they are launched

Parameters:

dataClassProxy

Return type:

Dict[str, Any]

launch_workers(dataClassProxy)#
Return type:

None

apply_orderskip_method()#

Compute the orders that will be rejected for each subInstrument

Return type:

None

complement_orders_to_skip(dataClass)#

Search for bad orders in the stellar template of all subInstruments.

Do not search the individual frames, as they might not be open when we reach this point

Parameters:

dataClass ([type]) – [description]

Return type:

None

process_orders_to_skip_from_user(to_skip)#

Evaluate the input orders to skip and put them in the proper format

Parameters:

to_skip ([type]) – Orders to skip

Returns:

Keys will be the sub-Instruments; values will be a set with the orders to skip

Return type:

dict

Raises:

NotImplementedError – [description]

generate_valid_orders(subInst, dataClass)#
Return type:

list

trigger_data_storage(dataClassProxy, store_data=True)#
Return type:

NoReturn

property subInstruments_to_use#
close_multiprocessing()#
Return type:

None

close_shared_mem_arrays()#

Close any array that might exist in shared memory

Return type:

None
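As a sketch of what closing shared-memory arrays involves, using Python's standard multiprocessing.shared_memory module (the registry class and its names are assumptions, not SBART's internals):

```python
from multiprocessing import shared_memory

# Illustrative sketch (assumed names): track the shared-memory blocks a
# routine created, then release them all once the run finishes.
class SharedArrayRegistry:
    def __init__(self):
        self._blocks = []

    def create(self, size):
        """Allocate a new shared-memory block and remember it."""
        block = shared_memory.SharedMemory(create=True, size=size)
        self._blocks.append(block)
        return block

    def close_all(self):
        """Close our view of each block and unlink the underlying memory."""
        for block in self._blocks:
            block.close()   # release this process's mapping
            block.unlink()  # free the OS-level segment (creator's duty)
        self._blocks.clear()

registry = SharedArrayRegistry()
buf = registry.create(1024)
buf.buf[0] = 42
registry.close_all()  # after this, the segment is gone
```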

kill_workers()#
open_queues()#
Return type:

None

close_queues()#
Return type:

None

select_wavelength_regions(dataClass)#
launch_wavelength_selection(DataClassProxy)#

Currently not 100% implemented!

Parameters:

DataClassProxy (DataClass) –