
AURORA user documentation

AURORA (Archive Upload Research Objects for Retrieval and Alteration) is a system that facilitates the transport of science data generated in laboratories to a storage system, and that allows sharing and moving the data to other locations for processing or further storage. It offers a rich and flexible way of attaching metadata to the stored datasets, including templating.

The system can be logged into here:

https://www.aurora.it.ntnu.no

Key features

Getting to use it

Anyone with a lab computer can start to use the archive system, provided the necessary criteria are met (network access, support for a required transport protocol etc.). Both internal NTNU users and external users are allowed.

The following guides describe how to get started with the AURORA-system, both for users and for computers/laboratories.

By a user

NTNU-users can enable login and start using the system themselves. This is done by going to the archive system's web address:

https://www.aurora.it.ntnu.no

and selecting “FEIDE login”. Log on with your NTNU username and password. The AURORA-system will then create a user account automatically based on the credentials from FEIDE.

Users external to NTNU, or who do not have access to an NTNU-account, need to be created either by the IT-services or by someone with permissions to create users for the department in question. For the IT-services to add external users, please send an email to orakel@ntnu.no that includes their first name, last name and email address, and state that it concerns adding a user to the AURORA-system.

Some laboratories have such restricted network access that they cannot even reach NTNU internet resources (except for the AURORA-system). Computers there are not able to log in using FEIDE-authentication. The user then needs to log into the AURORA-system with their email address and the AURORA password. The password for the AURORA-system itself can differ from the FEIDE/NTNU-password.

The password for a user can be changed inside the web-interface for the AURORA-system. Be aware that this will only change the password for the AURORA-system itself, not the FEIDE-account.

When an account is created by the IT-services or someone with the necessary permissions, a password will be automatically generated and sent to the user. The user can then change this password themselves.

Furthermore, the user needs permission to create datasets on a so-called research group (or research area group, depending on how it is organized). This permission can be arranged by the administrators of the various labs. To view datasets that others have created on a research group, the user needs separate rights, which can also be arranged by the lab administrators.

By a laboratory computer

In order for a computer in a laboratory to start using the AURORA-system and store its data there, it has to be registered by the IT-services. In addition, we need the following information:

  1. What monthly or yearly storage need do you foresee for the lab computer? We underline that the AURORA-system is at this point mainly meant as a system for transferring data out of the laboratory and attaching metadata to it. Other systems must be used for long-term storage.
  2. Who is going to be responsible for the lab computer (name, science group, department)?
  3. Where is the computer/laboratory located (campus, building, room)?
  4. What operating system is the computer running (Windows, Mac OS, Linux etc.)?
  5. What is the computer's descriptive name (usually follows the instrument it controls, e.g. “QCM”, “NMR400” etc.)?
  6. The hostname of the computer as used by the operating system.
  7. What is the absolute path on the computer where the top folder of data will be stored (e.g. “c:\files”, “d:\” etc.)?
  8. What IP-address does the computer have?
  9. If the computer is to be protected from the internet/outside of the NTNU network, we need to know the internet outlet number that it is connected to (usually written on the wall or network outlet with a label).

Collect this information and send a request to the IT-services in order for us to add the lab computer to the AURORA-system.

Archiving datasets

There are two storage modes supported by the AURORA-system, called automated acquire and manual acquire respectively. Automated acquire means that the dataset is retrieved after it has been created or finished. Manual acquire means that the data for the dataset is stored as it is being generated by the laboratory computer. These two modes are not completely mutually exclusive, as it is possible for a manual acquire-dataset to store its data several times over interrupted periods of time.

So the difference is that with an automated acquire-dataset the data is fetched in the background after it has been generated. After the data has been fetched in the background, the dataset is automatically closed by AURORA and no more data can be added.

With a manual acquire-dataset a storage area is opened in RW-mode when the dataset is created (typically available through Samba/CIFS). The user can then store data on that storage area while generating it, or copy data there in separate instances over time before manually closing the dataset. In any case the data will have to be put there in some way by the user (either manually or by telling the software to store on a share to that area), usually through e.g. Samba/CIFS. When the dataset is closed, no more data can be added to it, just as with the automated acquire-dataset.

After a dataset has been closed (both for automated acquire- and manual acquire-datasets), the distribution-phase of the data collection is executed. The data is then distributed to the locations specified in the policy for the computer in question. If no policy is defined, no distribution takes place. The distribution phase enables the user or research group to automatically have the data distributed to other locations, either for storage and/or processing. Please note that whether distribution happens or not, the data is still stored in the AURORA-system.

Automated dataset

Automated acquire (transfer or copying in the background after the data has been generated) is performed in the following way:

  1. Go to “Main Menu” and select “Create Dataset”.
  2. In the first window that appears, select the type of dataset (automated acquire), the group the dataset belongs to and the computer that the dataset is created from. Then press “Submit”.
Create Dataset Options
  3. In the next window, select which folder is going to be archived from the lab computer. It is recommended to use the “Browse”-button and its functionality to locate the folder or file that you want to archive. It will ensure that you get the correct path to the data with a minimum of hassle. When the correct path has been found or entered, press “Select”.
Create Dataset Browse
  4. In the last window you will enter the relevant metadata that needs to be set in order for the dataset to be created. The metadata that you enter will be checked for compliance with any metadata template, and you will get feedback on what is in non-compliance and why. Make changes accordingly and press “Submit” to create the dataset (or get another non-compliance feedback). We generally recommend that you fill in the “Description”-field as a minimum, beyond what might be mandatory by the template. Mandatory fields will be shown with an asterisk “*” after the input field.
Create Dataset Metadata
  5. Fill in the necessary information and hit “Submit”. If all the metadata was entered correctly you will get a message saying the dataset has been created:
Create Dataset Success
  6. If you have non-compliance on any of the metadata, this will manifest itself as a red cross behind the input field. In addition, an “Info…”-button will appear that allows you to get information on the field that failed and what constraints are in effect for the field. This will allow you to make proper corrections.
Create Dataset Metadata Noncompliance

Manual dataset

Manual acquire transfer is the copying of data while it is being generated. It also allows for adding data in several separate instances over time, and requires the user to manually close the dataset. It is done in the following manner:

  1. Go to “Main Menu” and select “Create Dataset”.
  2. In the first window that appears, select the type of dataset (manual acquire), the group the dataset belongs to and the computer that the dataset is created from. Then press “Submit”.
Create Dataset Options
  3. In the last window you will enter the relevant metadata that needs to be set in order for the dataset to be created. The metadata that you enter will be checked for compliance with any metadata template, and you will get feedback on what is in non-compliance and why. Make changes accordingly and press “Submit” to create the dataset (or get another non-compliance feedback). We generally recommend that you fill in the “Description”-field as a minimum, beyond what might be mandatory by the template.
Create Dataset Metadata

If all the metadata was entered correctly you will get a message saying the dataset has been created.

  4. If you have non-compliance on any of the metadata, this will manifest itself as a red cross behind the input field. In addition, an “Info…”-button will appear that allows you to get information on the field that failed and what constraints are in effect for the field. This will allow you to make proper corrections.
Create Dataset Metadata Noncompliance

The user will now have a dataset that is created and in an open state. It can be accessed through any protocol that the AURORA-system offers for its storage area, but typically one would use Samba/CIFS (see the chapter on the FileInterface for more information).

When you are finished transferring data to the manual acquire dataset, you need to close it by doing the following:

  1. Go to “Main Menu” and select “Manage Dataset”.
  2. Locate the dataset in question from the list that appears and select “Modify” and then the choice “Close…”.
  3. You will then be asked to confirm that it is OK to close the dataset. Select “Close Dataset” and you will get a message saying the dataset has been closed.
Manage Dataset Close

Change user password

If the user needs to change their password (for any of the available login methods, except FEIDE or any other external authentication authority), this is done by first logging into the AURORA-system and then doing the following:

  1. Go to “Main Menu” and select “Change Authentication”.
Change Authentication Select
  2. In the first window that appears, select the authentication-type in question. Typically you should just select AuroraID, which is the native authentication type of the AURORA-system. Press “Select”.
  3. In the last window, type the new password twice. Follow any instructions on the screen. When finished, press “Change”.
Change Authentication Authstr

A message will appear confirming any password change.

Please note that the AURORA-system allows authentication to happen through several methods, and that the password changed in this procedure is only for the main, internal Aurora credentials called AuroraID (unless another method was selected in the first window above).

For authentication methods that are trusted by AURORA and that happen through other services, like FEIDE, you have to change the password in the way prescribed by those systems. This will not affect the credentials for the other authentication methods.

Manage datasets

To manage datasets that have been created, go to the main menu choice called “Manage Datasets”.

Here the user will see a list of the datasets they have access to (to view, read data, read metadata etc.), moderated by any search criteria.

Manage Datasets View

Data columns

All entries in the “Manage Datasets” view have a separate column for “Dataset ID”. This is the unique identifier for the dataset in question and is used everywhere in the AURORA-system to identify it; it is even used by the file interface (such as on the Samba/CIFS-share).

Please also note that the choices available in the drop-down menus in the “Manage Datasets” view (modify, retrieve or view) are entirely dependent upon what phase the dataset is in (open, closed, acquiring etc.) and what permissions the user has. To get more permissions than you already have, talk to the laboratory administrators or research group admin.

The status-column shows icons for the state of the dataset. The dataset is either in an open (orange, open padlock) or closed (green, closed and crossed out padlock) state. Furthermore, the data of the dataset is either present locally in AURORA (green storage symbol) or not present locally in AURORA (red, crossed out storage symbol). Present locally means that the data is stored in one of AURORA's storage areas. If the data is not present locally, it might still exist somewhere else outside of AURORA.

Manage Datasets Status Icons

Datetime format

The datetimes in the AURORA Web-client are shown in ISO-8601 format and with any timezone information that the client is able to get from the browser. If no timezone information is available or the timezone is 0 (effectively no offset), the time is displayed in UTC and postfixed with the letter “Z” for Zulu or UTC.

The ISO-8601 specification will show the time in the following way (all examples are the same datetime):

2020-05-17T08:00:00Z
2020-05-17T09:00:00+01:00
2020-05-17T10:30:00+02:30

Here the year comes first (2020), followed by the month (05) and ending in the day (17). Then comes the T-notifier, signifying that the time follows, and the time is specified in normal 24-hour notation as hour:minute:second. The whole datetime is postfixed with the timezone information available (as required by the ISO-8601 specification).

The AURORA REST-server exclusively works in UTC-/unix-datetime, so any conversion going in and out of the server is done by the web-client. Any datetime coming from the REST-server has its timezone offset applied to the unix datetime before being converted and shown as an ISO-time string.
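As an illustration, here is a minimal Python sketch of this kind of conversion (the web-client itself is not written in Python; the function and its offset parameter are hypothetical):

from datetime import datetime, timezone, timedelta

def unix_to_iso(unix_time: float, utc_offset_minutes: int = 0) -> str:
    # Apply the client's timezone offset to the unix datetime from the
    # REST-server and render it as an ISO-8601 string.
    tz = timezone(timedelta(minutes=utc_offset_minutes))
    dt = datetime.fromtimestamp(unix_time, tz)
    if utc_offset_minutes == 0:
        # No offset: display in UTC, postfixed with "Z" for Zulu/UTC.
        return dt.strftime("%Y-%m-%dT%H:%M:%SZ")
    return dt.isoformat()

# The three example datetimes above, all the same instant:
ts = datetime(2020, 5, 17, 8, 0, 0, tzinfo=timezone.utc).timestamp()
print(unix_to_iso(ts))       # 2020-05-17T08:00:00Z
print(unix_to_iso(ts, 60))   # 2020-05-17T09:00:00+01:00
print(unix_to_iso(ts, 150))  # 2020-05-17T10:30:00+02:30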

How to change expire date

This option is dependent upon the user having the DATASET_DELETE-permission on the dataset in question, and upon the new expire date they seek to set being allowed within the current lifespan policy in effect in the entity tree (see How to set dataset lifespan for more information on lifespans).

There are two stages at which an expire date can be set, and those stages relate directly to the phases of a dataset:

There is a separate policy for changing expire-dates when the dataset has the status OPEN as compared to when it has the status CLOSED.

Those users fortunate enough to have the DATASET_EXTEND_UNLIMITED-permission on the dataset in question can extend beyond any restrictions imposed by the lifespan-policies.

In order to change the expire date of a dataset, do the following:

  1. Go to the “Manage Datasets” view in the web-client and find the row with the dataset in question.
  2. Go to the column “Modify” and left-click the three horizontal lines and select “Expire Date…”.
  3. Change the expire date according to your wishes, respecting the lifespan-policies, and left-click the “Change”-button.
Change Expire Date

The expire date can be specified in several forms:

After clicking the “Change”-button, you will receive feedback on whether the change was OK or not and what the new date is.

How to close dataset

This option is dependant upon the user having the DATASET_CLOSE-permission on the dataset in question.

In order to close a dataset, do the following:

  1. Go to the row that contains the dataset you want to close.
  2. Go to the column called “Modify” and click its three horizontal lines.
  3. Select “Close…”.
  4. A new window will appear asking for confirmation of the close dataset operation.
  5. Click the “Close Dataset”-button to close the dataset. A confirmation window will appear.
Close Dataset

How to edit dataset metadata

This option is dependant upon the user having the DATASET_CHANGE permission on the dataset in question.

In order to edit a dataset's metadata, do the following:

  1. Go to the row that contains the dataset in question and then hit the three horizontal lines in the column called “Modify” and select “Metadata…”.
  2. A new window will appear with the ability to edit the metadata. Make the necessary changes and hit “Submit”.
Edit dataset metadata

Please note that mandatory keys (must be filled in) in the metadata template for the given key are shown with an asterisk (“*”) behind them. Also, all keys may have a “+” and “-” button behind them. These are for adding or removing extra values on a given key, or removing the key completely. All metadata keys in AURORA are basically multi-value (what is termed arrays), but usually only one value is used. How many values a key accepts can also be moderated by the templates, so it is not necessarily up to the user what to do here.

It is also important to remember that when you choose to edit metadata, the metadata keys and values you see are not only the ones you saved, but also the ones that come from any template(s) in effect. If you want to see which metadata are actually saved, you need to choose to just read the saved metadata (see How to read metadata). Then no templates in effect will be added.

Editing metadata is possible as long as the data of the dataset exists and the dataset has not been removed. When the dataset has been removed, the metadata will no longer be allowed to change.

You can also add new keys to the metadata that are not defined by the template(s) in effect. You can do that by:

  1. Writing a new metadata namespace location in the input box labeled “Add metadata key” and click “Add”… or
  2. You can select a preset from the same input box by left-clicking once to get focus on the input box (and then for some browsers left-clicking once more) to get a dropdown menu of dataset presets.
Metadata add key

Allowable characters in a metadata key name are: a-z, A-Z, 0-9, “.” (period) and “-” (hyphen). Please be careful with adding metadata keys outside of the templates, as it is desirable that users add data on the same metadata keys to ensure sensible searchability. We suggest that if you have long-term needs for certain metadata, you notify your lab administrator so that they can ask the system operator for this to be added to a template, ensuring correct metadata namespace allocation as well as use.

This said, you can customize your own metadata keys by manually adding them. Also remember that for the normal metadata namespace, all keys have to start with “.”, e.g. “.ThisAndThat”, “.MyKey” and so on. The reason for this is that everything before the first “.” is system metadata, which is not accessible to users in the normal way and should usually not be addressed. You will be allowed to add system metadata keys in the metadata editing window, but they will be ignored and removed when updating the dataset through the AURORA REST-server.
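For illustration, the naming rules above can be sketched like this in Python (a hypothetical helper; the authoritative checks happen in the AURORA REST-server):

import re

# Only a-z, A-Z, 0-9, "." (period) and "-" (hyphen) are allowed.
KEY_CHARS = re.compile(r"^[A-Za-z0-9.-]+$")

def is_valid_user_key(key: str) -> bool:
    # Normal (user) metadata keys must start with "."; everything before
    # the first "." is reserved system metadata.
    return key.startswith(".") and bool(KEY_CHARS.match(key))

print(is_valid_user_key(".MyKey"))   # True
print(is_valid_user_key("system"))   # False: system metadata namespace
print(is_valid_user_key(".my_key"))  # False: "_" is not allowed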

For more in-depth information on how metadata and metadata keys work, please consult the chapter called How to edit template.

How to edit dataset permissions

In order to edit dataset permissions do the following:

  1. Go to the row in question in the “Manage Datasets”-view that has the dataset you want to edit permissions on.
  2. Go to the column called “Modify” and in the drop-down menu select “Permissions…”.

The rest of how to edit the permissions themselves is performed in the same manner as for other entities; this can be found in the chapter How to edit permissions.

How to read metadata

In order to read the metadata of the dataset (read-only), do the following:

  1. Go to the row of the dataset in question and then hit the three horizontal lines in the column called “View” and select “Metadata”. A new window appears.

All the metadata will be protected and the user will be unable to change its values.

How to remove a dataset

This option requires that the user has the DATASET_DELETE-permission on the dataset in question.

Performing this operation will not immediately remove the dataset if it is closed; instead it starts a voting notification-process where users have to vote on whether it is OK to remove the dataset. If no voting happens and the notification-process escalates up to the top of the entity tree, the maintenance-service will cancel the removal process if the expire-date has not yet been reached.

If the dataset is an open dataset, the removal of the dataset will be immediate without any voting process.

Please note that even if the dataset is removed, it is not actually deleted, but moved away from the user and kept for a certain time before actually being deleted.

In order to remove a dataset, do the following:

  1. Go to the “Manage Datasets”-view and find the row with the dataset in question.
  2. Go to the Modify-column and left-click the three horizontal lines and select “Remove…”.
  3. In the remove dataset view, review the information and click the “Remove Dataset”-button if you agree to start the removal process.
Remove Dataset

The Manage Datasets view has a rich ability to search for datasets based on metadata values. In order to open the search dialog with options for searching, click the “+”-button on the right hand side of the search-button. Several options will then be available to you:

Search Dialog

The first option, “Search Condition”, decides whether the search keys you use and enter values for (we will come to this momentarily) all have to be true in the search (ALL, which is the default), or whether it is enough that any of them is true (ANY) in order to match. This corresponds to the logical operators “AND” and “OR”.
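Expressed as a minimal Python sketch (illustrative only; the criteria here are simple equality checks on hypothetical keys and values):

def matches(metadata: dict, criteria: list, condition: str = "ALL") -> bool:
    # Each criterion is a (key, value) pair; ALL corresponds to logical
    # AND over the criteria, ANY to logical OR.
    results = [metadata.get(key) == value for key, value in criteria]
    return all(results) if condition == "ALL" else any(results)

meta = {".Description": "TEM image"}
crit = [(".Description", "TEM image"), (".Other", "something")]
print(matches(meta, crit, "ALL"))  # False: not all criteria are true
print(matches(meta, crit, "ANY"))  # True: at least one criterion is true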

“Results per page” is the number of datasets to show per page in your search result. You can change this by entering a new number and hitting the ENTER key.

Now, in order to use search criteria and not just search for all datasets that one has access to, one needs to add search keys. Search keys are basically keys in the namespace of the metadata (see How to edit metadata).

You can add keys to your search either through “Add search key preset” or through “Add user-defined search key”. The difference between the two is that the former offers presets the user can select, while the latter requires the user to manually write the namespace key to search with. Please also be aware that in general you cannot search with keys above the “.”-something namespace, because that is system-metadata. There are some exceptions to this, as can be seen when selecting preset search keys, where some system-metadata keys are included.

To add a preset search key, select one in the dropdown menu by clicking it, and the page will reload and add that key to the search area.

Search Key Preset

When the key has been added you are free to write values on it and then hit the search-button.

Search Key Field

In order to do more complex searches, one needs to add several keys to the search area. The procedure to add your own keys is the same as in the preset example, except that you have to enter the entire metadata namespace location. If you were to add the same key as the description-preset, you would enter “.Description” in the “Add user-defined search key”-field and hit the “Add”-button.

When having two or more search keys, the “Search condition”-setting comes into play (ALL/ANY).

Search Key Fields

In addition, one can see that each search key has two drop-down menus. The leftmost one is the “transformation filter” dropdown and basically tells the web-client whether you want the value you write for this key to be transformed in any way. This is handy in the case of metadata keys that are numerical unix datetime fields. For most values the “As-is” choice is the correct one. If you select search keys from the presets, the web-client will set the filter that is most appropriate for that key.

Search Key Filter DropDown

By selecting “Unix DateTime” in the dropdown, you basically tell the web-client to transform your value into a valid unix datetime value. The “Unix DateTime”-format is the same as for ISO-datetimes: YYYY-MM-DDTHH:MM:SSZ. You can skip the “T” and the “Z” and use a space instead, since they are ignored in any event. You can also skip the “:”-characters and use spaces instead. It is also possible to write partial datetimes; if you only write the hour, the minute and second will be filled in with 00. A sketch of this normalization follows the examples below.

Examples of valid datetime fields:

2020-01-01
2020-01-01 08
2020-01-01 06:00
2020-01-01 06 00
2020 01 01 06 00 00
2020-01-01T11:00:00Z
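A minimal Python sketch of this normalization (the exact algorithm in the web-client is an assumption here; this simply reproduces the rules above, treating the input as UTC):

import re
from datetime import datetime, timezone

def parse_partial_datetime(value: str) -> int:
    # "T", "Z", "-", ":" and spaces are all treated as separators,
    # so just extract the numeric fields in order.
    parts = [int(p) for p in re.findall(r"\d+", value)]
    # Missing fields (hour, minute, second) are filled in with 00.
    parts += [0] * (6 - len(parts))
    year, month, day, hour, minute, second = parts[:6]
    dt = datetime(year, month, day, hour, minute, second, tzinfo=timezone.utc)
    return int(dt.timestamp())

# The examples above normalize consistently:
a = parse_partial_datetime("2020-01-01 06:00")
b = parse_partial_datetime("2020 01 01 06 00 00")
assert a == b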

The “Byte Size” filter is for when you want to search for metadata keys that contain byte size values. This filter allows you to specify that you want to search a given key for a value such as “>” and “100M”, which means 100 MB (both M and MB are permissible). The filter converts the value 100M to 100 x 10^6 to give the byte value of 100 MB before passing it on to the search-method. If you want to write raw byte values you can either use the “As-is” filter or just write the byte values without a unit postfix. Valid units for this filter are: K or KB, M or MB, G or GB, T or TB, P or PB and E or EB.
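A sketch of this conversion (illustrative only, with the decimal semantics described above; this is not AURORA's actual code):

UNIT_POWER = {"K": 3, "M": 6, "G": 9, "T": 12, "P": 15, "E": 18}

def byte_size_to_bytes(value: str) -> int:
    value = value.strip().upper()
    if value.endswith("B"):             # both "M" and "MB" are permissible
        value = value[:-1]
    if value and value[-1] in UNIT_POWER:
        return int(value[:-1]) * 10 ** UNIT_POWER[value[-1]]
    return int(value)                   # raw byte values have no unit postfix

print(byte_size_to_bytes("100M"))  # 100000000, i.e. 100 x 10^6
print(byte_size_to_bytes("1GB"))   # 1000000000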

The other dropdown menu for each search key selects the comparator to use with the value. Supported comparators are:

The “=” equals comparator is the default if none is specified.

Search Key Comparator DropDown

If you also want to match removed datasets in your search, you tick the “Include removed datasets (metadata only)” box in the search options. As the text says, it can only give you the dataset and its metadata, but the data will be removed.

Also remember that when searching metadata keys with datetime information in them and you want to match a specific date, you need to either use the comparator “>=” or define your search as containing two criteria that limit the bounds of the dates you want to include (see the example below).

Searching DateTime keys

The reason for this is that all system-recorded dates (such as create-, expire- and closed-dates) are exact down to the microsecond. This means that to match something you either have to get it correct down to the microsecond or use other comparators as suggested above.
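For example, to match everything dated on 2020-05-17 one would bound the search with two criteria on the same key (the key itself is hypothetical here), computing the bounds as unix datetimes:

from datetime import datetime, timezone, timedelta

day = datetime(2020, 5, 17, tzinfo=timezone.utc)
lower = int(day.timestamp())                        # >= 2020-05-17 00:00:00
upper = int((day + timedelta(days=1)).timestamp())  # <  2020-05-18 00:00:00
# Two search criteria on the same datetime key:
#   key >= lower  AND  key < upper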

How to vote and stay sane

Certain notifications or actions in AURORA will prompt a voting process by the Notification-service. As of this writing, this happens when a dataset has expired and is up for closing or removal. It will also happen if a user asks for the dataset to be removed.

A voting process is a mechanism whereby the users vote for a certain action to take place, in this case removal or closing of a dataset. When users receive a notice about such a voting process, the notice (eg. an email) will contain voting information as well as a link to cast your vote(s). If enough votes are cast by various users the notified action will be executed.

The reasoning for this process is that we do not want certain actions to happen in AURORA without proper approval, especially not removal. Using a voting system also allows for a certain flexibility in how a research group specifies what votes their users have. The number of votes required to perform one of the actions is globally defined by the AURORA system. The only thing the research group can do is to specify the vote distribution among their users (some might not have any votes at all).

If no voting is performed in the event of a voting process, the notification will in time be escalated to the next level in the AURORA entity tree and whoever has votes on this level can also vote or take action to stop the process. Escalation will continue until the issue is resolved or it reaches the top of the entity tree, whereby the following will happen:

As mentioned earlier the voting process involves escalations up the AURORA entity tree if no one votes or takes actions to stop the notification. The escalation process works as follows:

  1. A notification is created on the level of the entity that the notification is about (most typically a dataset).
  2. The Notification-service checks what users or votes exist on the level it is at in the entity tree. If the level happens to be a dataset, only the creating user of that dataset is sent a notice about the voting process. The creating user is automatically given 1 vote, which is not enough to remove a dataset.
  3. If no votes are cast within a grace period, not enough votes have been achieved, and no action has been taken to cancel the voting-process (such as extending the expire-date of a dataset), the notification is escalated to the next level in the tree.
  4. The voting-process then repeats from point 2 through point 4.
  5. When enough votes have been cast, the action in question is executed (such as dataset removal); if the process reaches the top of the entity tree without enough votes, what happens depends on the notification type (see above).

When a notice is sent to user(s) it will in the case of a voting-process contain a message that includes the voting/acknowledgement information:

This message requires voting in order for it to be accepted. We need you to vote on it by clicking on the link below (you are not to enter any information on this page):

https://www.aurora.it.ntnu.no?redirect=./ack.cgi%3Fid%3DhmBIJvaagMeuTiu4grrYZYsh1PeTrJ9W%26rid%3DpMWQd0KeyzBzXmxB00OvnrEpjtAKIc8o

When you choose to vote to execute whatever this message is about, you will be sent to the confirmation page of AURORA. There you will be asked to log in if you are not already logged in, and then you will be asked whether you wish to confirm or not.

Confirm Notice voting

When you confirm, the web-client should return with a message saying whether it was successful or not.

Notice voting success

We would like to underline that the AURORA voting process is performed completely by digital ballots, and we do not foresee creating support for any physical notices whereby a mail-man will knock on your door and ask for physical balloting. Reaching the required votes in a voting process is completely by popular vote. No voting-processes are rigged, faked or otherwise stolen. Except, we guess, that the research group administrator might be construed as having a heavy leaning towards giving certain members more votes than others, or when AURORA keeps the current dataset because of lame-duck events where no or not enough voting has taken place. This we call enlightened rule.

How to view the dataset log

This option requires that the user has the DATASET_LOG_READ-permission on the dataset in question.

There are many reasons to want to read the dataset log, but some of the most common will be to check on the progress of the Store-service.

In order to view the dataset log, do the following:

  1. Go to the row of the dataset in question and then hit the three horizontal lines in the column called “View” and select “Log”. A new window will appear with log details relevant for the given dataset in question.
  2. View and check whatever you want to in the log, and change the loglevel if you want more information from the AURORA-system, e.g. by selecting the “DEBUG” loglevel. Please also note that to update the log-information being received, you can opt to regularly hit the “Submit”-button.
  3. When finished reading the dataset log, close the window or hit “Return to datasets” or any other relevant option.

The loglevels go from DEBUG all the way up to FATAL, where DEBUG gives you the most detailed number of logged entries, while FATAL will give you the least (if any). By default the log view is set to INFORMATION-level log entries.

How to access data

File-interface

The primary way to access the data of AURORA datasets is the so-called file interface.

Mounting (in windows)

Access on linux

The Aurora fileinterface is available on login.ansatt.ntnu.no at /fagit/Aurora/view/access/user/.

You may mount the fileinterface on your local computer similar to the description in Mounting (in windows).

Folder structure

In the top folder of the connected drive you will find

Use the Create_dataset.html file to create new datasets.

Structure of a dataset

A dataset folder will contain

The “data” folder is where the actual data resides. You may change the content of this folder in the usual ways for open datasets. Open datasets are datasets created as “synchronous” and not yet closed.

Footnotes

  1. The .html extension may be hidden by your operating system, but the .html files can still be recognized by a web browser icon.

  2. The selection set mechanism is not yet implemented. When implemented you should be able to tailor the selection folders as well as the top folder.

Web-client interfaces

There are several ways of accessing the data of a dataset. AURORA defines a concept called “Interface” that gives you various ways to access your data.

To see which interfaces that are available for your dataset:

  1. Go to “Manage Datasets” from the main menu and then find the dataset in question.
  2. Go to the “Retrieve”-column of the dataset and select the interface that you would like to use. Examples of available interfaces are: zip-archives, tar-archives and samba/cifs-shares.
Manage Dataset Interface Select
  3. After you have selected the folders you want, hit the “Render”-button and the interface will be rendered.

Sometimes the rendering might take some time, especially in cases like zip- and tar-sets. It is advisable to keep the render-tab open in the browser and refresh it once in a while until it has a ready rendering. The reason for this is that the information you get back is closely tied to which folders you select to render, and you are dependent upon selecting the same folders in order to see the rendered interface if you go back later.

When an interface is finished being rendered information about how to get the data will be displayed on the screen.

Manage Datasets Interface Render Result

Please also note that all of your datasets will always be accessible through various standardized protocols such as Samba/CIFS (please see the File-Interface chapter of this document).

Some interfaces will not be available to users outside of NTNU. An example of this is the samba-shares, which require an NTNU-account/username for authentication.

Remote control a computer

AURORA includes functionality to open up a tunnel from a user-computer to a lab-computer. This allows controlling experiments in the lab from a remote location. The remote control protocols supported can be defined flexibly in the AURORA templates for any given computer. Common protocols are RDP and VNC.

This functionality relies on a gatekeeper-service located on another server, which is utilized by AURORA.

To use this capability you need to have the COMPUTER_REMOTE-permission on the computer. If you do not have that permission, please go cuddle your laboratory administrator for permission.

Remote Control Computer

If you wish to open a tunnel to a lab computer, do the following:

  1. Go to the main menu of the AURORA web-client and select “Remote Control”.
  2. In the remote control window select the lab computer you wish to remote control.
  3. Next, select the protocol that you wish to open to the lab computer.
  4. Press the “Remote Control”-button.
Remote Control Result

A new window will appear with the information needed to utilize the new tunnel. Please note that you have to start using the tunnel within 10 minutes or it will close. After you have started to use it, it will stay open until you close your remote control software or the software loses its connection. At that point one would need to open a new remote control tunnel to continue working.

The remote control tunnel is only valid for the computer you opened the tunnel from, as stated in the “Your client address” field in the image above.

In addition, for RDP and VNC connections one can click the link for the new tunnel and download an auto-generated shortcut file for either RDP or VNC.

Manage the entity tree

AURORA has the ability to define and manage an entity tree of the various parts of the system, such as USER (for users of the system), GROUP (organizational/logical and roles), TEMPLATE (for defaults and enforcement of settings in the entity tree), STORE (for various ways of transporting data), COMPUTER (for devices that generate data) and INTERFACE (for ways of accessing the data in the datasets).

The AURORA Web-client has an entity tree view where all of these entities can be managed. They can be created, moved and deleted, have permissions assigned, have their metadata edited and so on, dependent upon your assigned permissions, the restrictions inherent in the entity type and any templates in effect.

Manage Entity Tree

Introduction to concepts

These are some introductory concepts about how to work with the AURORA entity tree. It is especially relevant to those that have management roles in Aurora.

It should also be noted that the research group-, roles- and lab-setup described here in the entity tree is only a best practice recommendation that we are using at NTNU. AURORA itself is totally general and dynamic, and one can easily create other structures and setups. But it is easy to get lost in the vast abyss of groups and permissions if one doesn't know what one is doing. It is also a point to avoid too much granulation, which increases complexity and potential confusion.

Structure overview

The basic element in Aurora is called an entity. It has a type, a parent and a set of metadata.

Entity types

There exists several entity types in AURORA, but those that are most relevant to user- and management roles are:

AURORA might also be confusing in the sense that it doesn't differentiate between Group-entities that are pure organizational/structural entities in the tree and those that are so-called role-groups. To AURORA all of them are just groups, and it is how you use them in the tree that makes the difference. This flexibility is also why we recommend as best practice to make the setup as simple as possible.

Entities are tied together with three different relations:

Relations

The parenthood forms a single noncyclic tree, but as a special case the root node is its own parent. Only groups can be parents, other entity types will be leaf nodes.

Membership

Membership associates an entity with another. It is directed, so assigning A as a member of B does not make B a member of A, but B may also be assigned to A explicitly. A child is considered an implicit member of itself and its parent.

Roles

An entity has a set of roles. The roles are the cascading set of its memberships. Consequently, A's roles include A, all its ancestors, B and its ancestors, and all other memberships any of them may have, etc. Consequently, all entities are members of the root.

Caveat: Assigning root as a member of another group will give the group role and any derived role to all entities. Care should be taken when assigning any higher level group as a member.
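The cascading rule can be sketched as follows (an illustrative model only; parent and memberships are hypothetical lookup functions):

def roles_of(entity, parent, memberships):
    # An entity's roles are itself, all its ancestors, and, recursively,
    # the roles of everything it is assigned as a member of.
    roles, queue = set(), [entity]
    while queue:
        e = queue.pop()
        if e in roles:
            continue
        roles.add(e)
        if parent(e) != e:            # the root is its own parent
            queue.append(parent(e))
        queue.extend(memberships(e))  # what e is explicitly a member of
    return roles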

Permissions

A set of permissions may be granted for an entity (subject) on another entity (object). Permissions will also be inherited from the object's parents, unless explicitly denied.

The complete set of a subject's permissions on an object is the union of the direct or inherited permissions of all the subject's roles.
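A sketch of that union (illustrative only; grants, denies, parent and roles_of are hypothetical lookups, and the exact deny semantics, a deny masking what was inherited from above, is an assumption based on the description):

def effective_permissions(subject, obj, parent, grants, denies, roles_of):
    total = set()
    for role in roles_of(subject):
        # Collect the object's ancestor chain, root first, so that denies
        # on lower levels can mask permissions inherited from above.
        chain, node = [obj], obj
        while parent(node) != node:   # the root is its own parent
            node = parent(node)
            chain.append(node)
        perms = set()
        for node in reversed(chain):
            perms |= grants(role, node)
            perms -= denies(role, node)
        total |= perms                # union over all the subject's roles
    return total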

Management tasks

The Aurora “basic goal” is to move datasets from a lab computer to a research group, where they can be made available to the group members or others who need access. The management tasks thus mainly consist of giving the users the necessary permissions for this operation.

The primary interface for management is the entity tree view. This is found by logging into the Aurora web page and selecting “Manage entity tree” from the main menu. The tasks are based on how NTNU currently has organized the structure, and may change if we manage to integrate Aurora with other systems and services, such as BookItLab (but then this documentation will also change).

Tree references are given like “/NTNU/[ORGFAC]/[ORGDEP]/Labs/TEM-Lab”, where the “/” separates the different nodes. The leading “/” is short for ROOT/. The elements inside “[]” are placeholders for the relevant entity, such as Faculty, Department, NTNU etc.

Left of the entity name there is a menu icon consisting of three lines. By clicking this you can select what to do with this entity.

Some definitions
Users

Users are automatically created when they log into Aurora with FEIDE. This is the recommended way of creating users. They may however be created manually as local users. Local users will then be matched with a FEIDE login if the email address matches.

Users are located in /[ORG]/users/[email], like “/NTNU/users/bard.tesaker@ntnu.no”. To create a local user, select “Create User…” on /[ORG]/users/.

Roles

Roles are generally located in:

[ORG]/[ORGFAC]/[ORGDEP]/Labs/[ORGLAB]/roles
[ORG]/[ORGFAC]/[ORGDEP]/Research Groups/[GROUP]/roles

depending on whether the role is in relation to the computers of a lab or a research group.

Create a research group
Create the group

Create the group under:

 [ORG]/[ORGFAC]/[ORGDEP]/Research Groups/

Name it “[ORG]-[ORGFAC]-[ORGDEP] [group-name]” so it is uniquely identifiable in dropdowns etc.

Create roles for the group

We suggest three roles for the research group: administrator (_adm), member (_member) and guest (_guest).

Under:

[ORG]/[ORGFAC]/[ORGDEP]/Research Groups/[GROUP]/roles

create the three roles like "[ORG]-[ORGFAC]-[ORGDEP] [group-name]_[role]", such as:

NTNU-NV-IBI Evolutionary Genetics and Fruitflies_adm
NTNU-NV-IBI Evolutionary Genetics and Fruitflies_member
NTNU-NV-IBI Evolutionary Genetics and Fruitflies_guest

Then:

Assign permissions to the roles

On the:

[ORG]/[ORGFAC]/[ORGDEP]/Research Groups/[GROUP]/roles

and its roles, select “Permissions…”. Grant the following permissions to the roles:

Add new lab

The lab is essentially a collection of computers. This is where the permission to read from the computers is granted.

Add a computer

How to add members on a group

On all groups (both role-related and hierarchical/organizational ones) one can add members. These members will inherit any permissions or memberships that the group has. This makes it easy to manage which users get what permissions. Now, it is important to remember that the research- and/or subject area groups (in AURORA just called research groups) are the ones that own the datasets. This means that one has to manage how those datasets are accessed and used by the users. This is done through groups that are set up as role-related groups (they are still just of the entity type GROUP). These can be accessed under root/NTNU/[FACULTY]/[DEPARTMENT]/Research Groups/[GROUPNAME]/roles, where “FACULTY” is the faculty acronym (e.g. NV), “DEPARTMENT” is the department acronym (e.g. IFY) and “GROUPNAME” is the name of the research group in question. So a full example would be:

/root/NTNU/NV/IFY/Research Groups/NTNU-NV-IFY Nortem/roles

When a user creates a dataset they have to choose where to save it. This choice is a group which will then own the dataset. By creating the dataset the user gets all permissions on that specific dataset (except DATASET_MOVE, DATASET_DELETE and DATASET_EXTEND_UNLIMITED), but not necessarily on the other datasets that reside with the group (they might not even see them).

The research group itself resides under the department or section in question, under a sub-category called “Research Groups”. In AURORA the departments are found under root/NTNU/FACULTY, where FACULTY is the acronym for the faculty, such as NV (Faculty of Natural Sciences). The datasets reside under the specific research group in question:

Manage Entity Tree

In AURORA we have divided all research groups into three roles (or role groups):

The _adm-group is for the administrators of the research group and has all the relevant permissions on the group's datasets (including updating, deleting etc.). The _guest-group is for guests and only has the permission to create datasets on the group (a guest cannot see the other datasets, delete them, read them and so on - only their own datasets). The _member-group is for users that are members of the research group; they have access to all the datasets created there and can read them and their metadata, but have no change or deletion privileges except for those datasets they have created themselves.

The role groups reside under the research group itself in a GROUP called “roles”.

Manage Entity Tree Role Groups

When you find a role group on which you have administrative permissions (by residing in one of the _adm-role groups), you can add or remove members by left-clicking the symbol with the 3 horizontal lines and selecting “Members…” in the dropdown menu. The AURORA web-client will then open a separate tab or window with the “Members…”-window:

Add Group Member

As the astute individual will notice upon seeing the “Members…”-window, you can add both GROUP- and USER-entities. Please refrain from adding GROUP-entities if you do not know what you are doing. As best practice we recommend only adding USER-entities to the role groups.

The entities that are already members will be shown at the top of the select-dialog in the “Members…”-window, separated from the tree by a long horizontal line. You can now add or remove members of the role group in question by selecting the user-entity in the select-dialog and then clicking either the “Add”- or “Remove”-button.

How to assign tasks

This option is only available for USER- and GROUP-entities. Its purpose is to assign existing task(s) to computer(s).

In order for the research group (or any group, except roles) or user to define what is to happen to the dataset after it has been closed, it is possible to assign task(s) to the computer that is the target of the dataset (both manual acquire and automated acquire ones). The tasks assigned in this way will have their “put” and “del” parts added to the distribution task of the dataset, so that they can in this way constitute a distribution policy for the user or the group in question. Multiple tasks can be assigned at the same time.

User task(s) are only added for the user that creates the dataset. Tasks assigned from a group are only added if the dataset's owner group is the same as the assigned group. In all other cases they are not used.

This means that if you create a dataset yourself and you specify that the dataset belongs to “My Group” and that you are creating the dataset from “Computer A”, only tasks that have been assigned for your user to “Computer A” and tasks assigned for “My Group” to “Computer A” will be used for the “put”- and “del”-definitions for that dataset (check the documentation on How to edit tasks for more information). These parts of the task will be run as soon as the dataset has closed.
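A minimal sketch of this selection rule (illustrative only; the layout of the assignment data is an assumption):

def applicable_tasks(assignments, creator, owner_group, computer):
    # assignments: list of (assignee, computer, task) tuples. Only tasks
    # assigned to this computer count, and only those assigned by the
    # creating user or by the dataset's owner group are used.
    return [task for assignee, comp, task in assignments
            if comp == computer and assignee in (creator, owner_group)]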

In order to assign such tasks do the following:

  1. Go to the user- or group entity that you wish to assign tasks for.
  2. Click on the three horizontal bars to the left of the entity and, in the drop-down menu that appears, choose “Task Assignments…”.
  3. A new window will open with the “Task Assignment” options. First select the computer that is to have the assignment set for it (in the “Computers” selection box). Then allow some time for the browser to reload the page.
  4. Go to the “Tasks Assigned” box and you will see any potentially assigned tasks there. If you wish to add or remove a task, select the task in question and left-click “add” or “remove” below its box.
Entity Assign Tasks

How to assign templates

This option is only available for GROUP-entities. In order to assign template you need to have the permission GROUP_TEMPLATE_ASSIGN on the GROUP-entity in question.

One can assign any number of templates to a GROUP-entity. The templates take effect in the order they are assigned, and the last TEMPLATE-entity in the list takes precedence for any given metadata key that is defined in several places in that list. The order of the list can easily be changed in the assign template view.

Please also be aware that templates in and of themselves are neutral: it is only when you assign them that they must be assigned as having effect for an entity type. A template can be assigned on any GROUP-entity that one wishes, and for differing entity types at the same time (but take care not to create too much chaos in your entity tree, lest it come back with vengeful karma).

In order to assign templates do the following:

  1. Locate the GROUP-entity in the “Manage Entity Tree” view that you wish to assign templates to.
  2. Left-click the three horizontal lines to the left of the GROUP-entity and select “Templates…”. A new window/tab appears with the assign template window.
Assign Template
  3. In the assign template-window that appears, select which entity type you wish to edit assignments on (“assign entity type as”). DATASET is the default entity type, and if you select another, the page will reload in order to show you any existing template assignments for that type. Please allow time for that to happen.
  4. Go to the “Entity template pool” and select the template that you wish to assign.
  5. When you have found the template, select it by left-clicking on it once and then press the “Assign”-button. The page will reload and show your new assignment.

At this point it might be that you wish to rearrange the order of the template assignments (see above). It might also be that you do not wish to assign any new template, but that you only wish to change the assignments order.

In order to change the template assignment order do the following:

  1. In the “Current ENTITY-TYPE assignments” box, select the template that you wish to change the order of.
  2. Press either the “up”- or “down”-button to move the selected template into the place you wish it to be. Press it several times to incrementally move the template into its correct place. Please allow time for the page to reload each time you press the “up”- or “down”-button.

How to create an entity

This option is available for: GROUP, COMPUTER, USER and TASK. It is also only available on an existing GROUP-entity, with the exception of TASK (which is also available on a USER-entity). The user must have the relevant _CREATE-permission (GROUP_CREATE, COMPUTER_CREATE etc.) on the parent-entity in question.

Please note that what metadata must be answered in order to create an entity depends on which templates are assigned to the entity tree and have effect on the parent-entity and the entity type that you are attempting to create.

When you select to create an entity from the drop-down menu on another entity, that entity will become the created entity’s parent.

  1. Go to “Manage Entity Tree” on the main menu of the AURORA web-client. Select one of the create options:
Group Dropdown menu
  2. After you select one of the options by left-clicking it, a new screen appears and you might be asked for metadata that needs to be filled in (dependent upon the template(s) in effect). After any metadata is filled in and you hit “Submit”, you should be presented with a success-message.
Create Group

How to delete an entity

This option is available for: GROUP, COMPUTER, USER, TEMPLATE and TASK. The user must have the _DELETE permission (GROUP_DELETE, COMPUTER_DELETE etc.) on the entity in question.

Please note that deleting a USER-entity is strictly not possible as it will only be anonymized (GDPR). Please see the AURORA web-client privacy statement for more information.

  1. Click on the “Manage Entity Tree” on the main menu.
  2. Go to the entity you want to delete and click to open the dropdown-menu and select “Delete…”.
  3. You will be presented with a confirm screen. Click “Confirm” if you wish to actually delete it.
Delete Entity

After you have confirmed the deletion, you should be presented with a success message.

How to edit metadata

This option is available for: COMPUTER and DATASET.

The possibility of editing metadata is available by selecting the “Metadata…”-option on the drop-down menu of the entity in question.

The web-client will then open a separate tab for editing the metadata. Which metadata appears in the window depends upon whether any has been defined before; any template(s) in effect will also influence the outcome.

When editing metadata for a dataset, you will see something like this (dependent upon templates in effect):

Edit Dataset Metadata

while editing computer metadata will be something like this:

Edit Computer Metadata

An asterisk on the right hand side of the metadata-key value input means that the given metadata-key is MANDATORY (see How to edit template). This means that you have to fulfill whatever requirement is on the key from any aggregated template(s). There is also a minus-button on the right hand side of the input-fields, which makes it possible to remove that given metadata key or values on that key (a metadata key can have multiple values if so defined). Please note that even if you remove a field here that is defined as MANDATORY, it will fail upon checking and the metadata-script will come back and notify you if need be.

You can also add new keys to the metadata that are not defined by the template(s) in effect. You can do that by:

  1. Writing a new metadata namespace location in the input box labeled “Add metadata key” and click “Add”… or
  2. You can, if you are editing metadata for a dataset, select a preset from the same input box by left-clicking once to get focus on the input box (and for some browsers left-clicking once more) to get a dropdown menu of dataset presets.
Metadata add key

Allowable characters in a metadata key name are: a-z, A-Z, 0-9, “.” (period) and “-” (hyphen). Please be careful with adding metadata keys outside of the templates, as it is desirable that users add data on the same metadata keys to ensure sensible searchability. We suggest that if you have long-term needs for certain metadata, you notify your lab administrator so that they can ask the system operator for this to be added to a template, ensuring correct metadata namespace allocation as well as use.

This said, you can customize your own metadata keys by doing this. Also remember that for the normal metadata namespace, all keys have to start with “.”, e.g. “.ThisAndThat”, “.MyKey” and so on. The reason for this is that everything before the first “.” is system metadata, which is not accessible to users in the normal way and should usually not be addressed.

When you have made the necessary adjustments, hit the “Submit”-button and your changes will be submitted. You should receive a message saying the metadata has been successfully updated.

Please note that all metadata handling happens in a separate script called metadata.cgi, and that you will therefore experience some redirects between the page you are in and that page. Please allow time for the page to reload/redirect.

How to edit permissions

As of this writing it is possible to edit permissions on GROUP-, USER-, TEMPLATE-, TASK- and DATASET-entities. AURORA has a rich set of permissions that can be set or removed on any given entity. These permissions are also inherited down the entity tree, which makes it possible to set permissions higher up that have effect further down. The permission structure of AURORA is divided into 4 categories: inherited (permissions inherited from higher up in the entity tree), deny (permissions explicitly denied on the entity), grant (permissions explicitly granted on the entity) and effective (the resulting permissions in effect).

In order to edit permissions on an entity, go to the “Manage Entity Tree”, locate the entity you want to edit, click the icon with the 3 horizontal lines to the left of its name and select “Permissions…”. A separate window will then open:

Entity Tree Assign Permissions

The entities (in this example two USER-entities and a GROUP-entity called Admins) that have permissions on this entity (in this case a GROUP) are shown at the top of the select-dialog, separated from the rest of the tree by a long horizontal line. And to dispel (or conjure up more) confusion, it should be noted that most of the objects in AURORA are just entities of various types arranged on a tree. They are essentially the same, but with a type attribute on them; how we treat, and allow the user and system to treat, these entities varies depending on that type. Confused? It might be more warped than space-time around a tiny black hole, but it is not something that needs to be contemplated too much, and it is basically what makes the AURORA system very simple in its foundations.

For practical reasons we can say that most users should only edit permissions on datasets or on the role groups of the research group that owns the datasets (to make the permissions effective for all datasets of that group). All other editing should be reserved for administrators.

As best practice we recommend setting the general permissions through the role groups (NAME_adm, NAME_guest and NAME_member - see the “How to add members on a group”-paragraph). If more granular settings are desired, we recommend that these are set on the datasets in question themselves (not on the research group or its role groups). This granularity can even be managed by the users that created the datasets.

When you click any of the entities in the select-dialog, the window will reload and the web-client will check whether the selected entity has any permissions on the entity that you are editing. If there are any permissions, they will be displayed in the 4 aforementioned categories of “inherited”, “deny”, “grant” and “effective” (see above). One can then click the various permissions’ deny- and grant-checkboxes to change the permissions for the selected entity, and then click the “Update”-button to make the changes active. When you press Update, the web-client will try to save the new permissions and then show you the result after the reload.

The inherited column will have a check mark in the middle of the []-brackets if the permission in the respective row is inherited from above in the tree. If nothing is inherited, there will be no check mark there. If the entity chosen from the select-dialog has deny- or grant-permissions set on the entity one is viewing (in the example above the “ROOT”-GROUP, or top of the tree), the corresponding checkboxes will be checked. In the effective column, a check mark shows whether the permission on the respective row is in effect or not (through inheritance, grant or both - yes, a permission can actually be granted and inherited at the same time).
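
As an illustration, and assuming the common evaluation rule that a deny masks an inherited permission while a direct grant applies regardless (a sketch, not an authoritative description of AURORA’s internals), the columns of a single permission row could combine like this:

inherited  deny  grant  effective
[v]        [ ]   [ ]    [v]        # inherited from above
[v]        [v]   [ ]    [ ]        # inherited, but denied on this entity
[ ]        [ ]   [v]    [v]        # granted directly on this entity
[v]        [ ]   [v]    [v]        # both inherited and granted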

How to edit subscriptions

This option is only available for GROUP. The user must have the GROUP_CHANGE-permission on the group in question.

The subscriptions view sets two properties on a group:

  1. Which users are to be notified, and through which Notice-delivery (Email, SMS etc.), when a Notification is invoked on that given group in the entity tree?
  2. How many votes do the defined user(s) have when the Notification resides on that group level in the entity tree?

An explanation of concepts is required here. All notifications sent by the Notification-service of AURORA must know which user(s) to send a notification to. All notifications are related to an entity in the entity tree, typically a dataset. When the Notification-service attempts to send a notification about e.g. a dataset, it starts on the level of the dataset in the entity tree and notifies the dataset creator of any messages.

However, certain notifications in AURORA require a voting process to commence, to decide whether the given action is to be executed or not. Typical examples of such notifications are when a dataset has expired or has been asked to be removed. These types will start a voting process on whether to e.g. remove the dataset or not. If the user notified on the dataset level in the entity tree does not vote, or is not able to cast enough votes him-/herself, the notification will be escalated to the next level in the entity tree (at first, the parent group of the dataset). The Notification-service then tries to determine if any user(s) have voting rights on that level, and which Notice-deliveries those users subscribe to. It then sends to these users. If these users do not answer or do enough to stop the process, it will be escalated to the next level in the entity tree.
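
A hypothetical escalation could look like this (the levels and outcomes are purely illustrative):

Level 1: the DATASET itself  -> the creator is notified; no response within the deadline
Level 2: the parent GROUP    -> users with votes and subscriptions on this group are notified
Level 3: the next GROUP up   -> escalated again, since the votes cast so far were not sufficient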

The subscriptions view allows the user to edit which users subscribe to which Notice-deliveries and, furthermore, how many votes they have, if any, on that given group level.

In order to add, remove or change subscription settings, do the following:

  1. Go to “Manage Entity Tree” on the main menu.
  2. When viewing the entity tree, locate the group on which you want to edit subscriptions, left-click the three horizontal lines to the left of the group and, in the drop-down menu that appears, select “Subscriptions…”.
  3. In the subscriptions window, you first have to select the user to add, edit or remove by going to the “User Votes” box and left-clicking the user you want to work on. The page will then reload and display any Notice-delivery type assignments for that user in the “Subscriptions”-box.
  4. You can add or update a user’s number of votes by selecting that user, filling in the “Votes” input and clicking “Update”. You can remove a user by selecting him/her and left-clicking the “Remove”-button.
  5. You can add or remove Notice-delivery subscriptions for that user by left-clicking the Notice-delivery in question and then clicking the “add”- or “remove”-buttons under the Subscriptions-box.

Please note that all Notice-delivery types and users that have been set appear above the long horizontal line in the box.

Edit Subscriptions View

How to edit tasks

This option is available for: USER, GROUP and COMPUTER. The user must have the TASK_CREATE permission on the entity in question (USER, GROUP or COMPUTER).

Tasks are a way of defining distribution policies in AURORA. They tell the system where to fetch the data for a dataset (in the case of automated acquire-datasets) and where to put it (in the case of both automated and manual acquire-datasets). They can even define deletion processes.

The three sub-categories and the order of execution of a task are:

  1. Get (fetch data from a remote area or computer while the dataset is open and of an automated type).
  2. Put (put data to a remote area or computer after the dataset has been closed; both manual and automated types).
  3. Del (delete data from a remote area or computer after the dataset has been closed; both manual and automated types).

When an automated acquire-dataset is being created, the AURORA-system probes the path from (and including) the COMPUTER being the source of the dataset and up the entity tree to find the first available task. If it finds one or more tasks on any of the entities, it will select the task which comes first alphabetically and use that as the main task for the get-, put- and del-operations of the dataset. Furthermore, it will combine the data in the task with metadata from the COMPUTER, such as host name and other parameters, to form a complete set of parameters for performing whatever operations have been defined by the task.

In addition to this, it will also search the owner group of the dataset and the user who created it for any task assignments on the COMPUTER in question (see How to assign tasks). If it finds any assignment(s) here, it will take those tasks’ put- and del-definitions (not get) and add them to the put- and del-definitions of the task first selected as the main task. In other words, this mechanism enables the owner group and the user creating a dataset to have their own policies on where the dataset is to be copied after it has been closed. The put- and del-operations will only be run once the dataset has been closed; therefore they will also be executed for manual acquire-datasets.
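
A hypothetical walk-through of this selection (the entity and task names are purely illustrative, and it is assumed that the probe stops at the first entity that defines any tasks):

COMPUTER “nmr-pc-01”   -> defines no tasks; the probe continues up the tree
GROUP “NMR Lab”        -> defines the tasks “Archive” and “Backup”; “Archive” is
                          selected as the main task (alphabetically first)
Owner group / creator  -> any task assignments on “nmr-pc-01” contribute their put-
                          and del-definitions (but not get) in addition to the main task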

Edit Task

If you only want to perform the operations of certain of a task’s sub-categories, only define data for those categories and nothing else. In the example above, only one “get”-operation has been defined, and it is general, since it does not define the host name or any host parameters (these are taken from the metadata of the COMPUTER using this task).

For a user it makes sense to only define put-operations. There is currently no way for the user or dataset owner group to define any get-operations, since the get-part is defined globally at the top of the entity tree. We aim in the future to provide the possibility of executing tasks manually outside the create-dataset workflow; right now this is not possible.

You can define as many get-, put- or del-operations as you want in one single task. Also note that within a sub-category (get, put or del) the order of the operations matters, since the topmost one will be executed first.

The name-parameter of an operation (not the name of the task) is just a label for the user and has no meaning to the system itself. The Store-parameter says which transfer protocol is to be used. The most common one is Store::RSyncSSH, which is the safest and preferred method. We aim to also provide access to cloud protocols in the future, such as OneDrive, GDrive etc.

Another important parameter is computer, which potentially defines which computer the operation is to be performed on. This parameter can be “NONE SELECTED”, which means that none has been set. When a task is run as part of a “create dataset” event, the computer of the first get-operation will be filled in by the REST-server no matter what the task says. For other operations it needs to be filled in, and the computer must be registered in AURORA.

In addition to these, you have two parameters with sub-parameters, called “classparam” and “param”. The “classparam” parameters are parameters to the Store-class itself. Usually the “authmode” parameter is enough here (see below for an explanation). The “param” parameters are parameters to the Store-class when connecting to the COMPUTER in question. This can be anything, but the most common, as the image above shows, are the port, the authentication information (in this case the private SSH key used) and lastly the username to connect with. All of these parameters, including the ones in classparam, can be overridden by the metadata of the COMPUTER in question.

You can also get the edit-task view to show you which “param” keys are necessary given the “classparam” settings of the chosen Store-class. This is achieved by setting the classparam values that you want, reloading the page and then expanding the caret called “Store-class required params”. It will show you which key(s) are required and what the defaults are if they are not specified. It will also show you the regex requirements for each param key (for those of you who happen to be lucky enough to have achieved regex fluency).

In the example above, the rest of the information in “param” is provided by the COMPUTER-metadata, such as the host and the public key certificate.

As for the mentioned “classparam” called “authmode”: it is valid across all the Store-classes and defines how the authentication with the host is performed. Valid modes are the digits 1-4 (although not all may have meaning to all Store-classes):

  1. Password (it is expected to provide a param called “password”).
  2. Password-file (it is expected to provide a param called “passwordfile”).
  3. Authentication key (it is expected to provide a param called “privatekey”).
  4. Authentication key-file (it is expected to provide a param called “privatekeyfile”).

It is advised that the expected parameters of these 4 modes are provided in the COMPUTER metadata if tied to a COMPUTER in the entity tree.
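
As a minimal sketch, an operation using authmode 3 (authentication key) could have parameters like these, written here in a flat key = value style for illustration (all values are hypothetical):

classparam.authmode = 3          # authentication key (mode 3 above)
param.username      = labuser    # hypothetical username to connect with
param.port          = 22         # hypothetical SSH port
param.privatekey    = ...        # the private SSH key itself, or taken from the COMPUTER metadata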

Please also note that if you use password-files, they have to have names ending in “.pwfile”. For certificate-files, the private key needs to end in “.keyfile”. Furthermore, all of these types of parameters are sandboxed and can only fetch files from within the location specified in the sandbox-parameter of the AURORA-system. This means that these parameters are only for administrators, and we advise users not to use password- and/or certificate-files in their tasks. We also urge administrators to be careful about using password-files: since all users can create tasks, a user could point a task at one of your password-files in the sandboxed area and deliver that password to a server they control, thereby gaining access to your password and potentially compromising your servers.

Their ability to access computers with your certificate-files is, however, limited, since it also requires them to have the COMPUTER_READ-permission on the computer they are attempting to access.

Please note that tasks have permissions on them and are not readable by any user without the correct permissions. However, we still strongly advise caution with saving passwords and authentication details in a task. We advise against using AURORA as an authentication repository, since the side effects in the event of a security breach can be huge.

In the case of connecting to services that allow tokens or access keys to be generated (such as cloud services like DropBox, OneDrive and so on), we advise that you instead create such token(s) for a part of the service and store those in the task, so that even if the task data is compromised you do not compromise your whole cloud account.

Common params to know about with Store-types (all are lower case):

Not all of these are necessarily used by the Store-type in question, but they recur in several. The parameter “remote” is always there, but it is auto-filled when running get-operations as part of creating a dataset. “remote” always signifies the remote computer, regardless of whether the operation is get, put or del.

These permissions are needed on the computer in question to run the various operations:

How to edit template

This option is only available on a TEMPLATE. Its purpose is to edit the templates that either are or will be part of the entity tree through template assignments (see the How to assign templates-paragraph). AURORA has the ability to define any number of templates consisting of any number of key-definitions.

Templates and their settings are inherited down the entity tree, so that a template assigned at the top of the tree will have effect all the way down to the bottom of it, if no other templates override its settings on the way down. The overrides do not happen per template, but per metadata key in the template, so that the aggregated result of the templates is the combined collection of the templates’ definitions for any given metadata key.

Let’s take as an example the metadata key for an entity’s creator:

.Creator

As you can see, the metadata key starts with a dot (“.”). All open metadata must reside under “.” followed by something. Metadata outside the “.” namespace is considered system metadata and cannot be changed through the normal REST-server metadata methods. Nor can this data be accessed through the REST-server’s normal methods. It is therefore a rule in AURORA that all metadata defined by the user is to reside in the “.” namespace, which is considered “open”. Templates can define metadata anywhere in the namespace, but be careful with the differentiation between open and non-open metadata if you want the user to be able to read and/or change the metadata.

Furthermore, templates in and of themselves do not have any type (DATASET, GROUP, COMPUTER etc.). A template gains validity for an entity type once it is assigned for that type (see the “How to assign templates”-paragraph).

Now, once you edit a template there are the following fields to contend with for each metadata key that is defined:

Edit Template

The meanings of the various flags are as follows:

  1. MANDATORY (a value is required for this key; if none is given, the first default value is used, if available).
  2. NONOVERRIDE (the key’s definition cannot be overridden by templates further down the tree).
  3. SINGULAR (only one value is allowed on the key; multiple defaults are rendered as a drop-down).
  4. MULTIPLE (several values are allowed on the key; multiple defaults are rendered as checkboxes).
  5. PERSISTENT (once a value has been set, it cannot be changed; use with caution).

Please note that if a value is required (through MANDATORY) and none is defined, the first element from the defaults will be selected (if available).

As already mentioned, the PERSISTENT-flag should be used with caution. Once a value is set, it is impossible to change it, and the only way to circumvent the issue is to remove the PERSISTENT-flag from the template that defined it, update the value and then set the template back to PERSISTENT.

As can be seen, multiple defaults can be used to define drop-downs and checkboxes by using the SINGULAR- and MULTIPLE-flags accordingly. Please note that setting the SINGULAR- and MULTIPLE-flags at the same time is not allowed; SINGULAR will be preferred.

At the top of the template-view one can see a heading called “Assignments”. This expandable header will give you an overview of where the template being edited has been assigned and as what type it has been assigned (if at all). This can be practical when planning what changes to make to a template and for better understanding their potential implications.

Template Assignments

In general, be careful with giving too many people the permissions to edit or create templates. Defining templates is a delicate process, and assigning them even more so; careless use might lead to quite undesired results.

Some general steps:

Please note that in the case of a metadata key that is both MANDATORY and SINGULAR, AURORA will select the first of the defined default values if none is specified by the user when entering metadata. This is how it satisfies these requirements. Therefore, be careful to put your desired default value first in such scenarios.

When you are finished making changes, press the “Change”-button and the changes will be written to the database.

How to move an entity

This option is available for: DATASET, GROUP, COMPUTER, USER, NOTICE, INTERFACE, STORE, TEMPLATE and TASK. The user must have the _MOVE permission (GROUP_MOVE, COMPUTER_MOVE etc.) on the entity being moved and the _CREATE permission (GROUP_CREATE, COMPUTER_CREATE etc.) on the new parent entity.

Please also remember that not all entities can be moved to any entity type that you would like. Here is a list of constraints:

As you can see, the constraints are centered around GROUP being the parent. These constraints reflect the restrictions that you also have when creating these entities.

When you have located the entity you want to move:

  1. Left-click the three horizontal lines to bring up the drop-down menu on the entity.
  2. Select the option “Move…”.
  3. In the window/tab that appears, select the destination parent for the entity.
Move Dataset
  4. When the parent has been selected, press the “Move”-button and a success message should appear upon success (or a failure message, e.g. missing permissions or wrong parent entity type).

How to rename an entity

This option is available for: GROUP, COMPUTER, DATASET, TASK and TEMPLATE. The user must have the _CHANGE permission (GROUP_CHANGE, COMPUTER_CHANGE etc.) on the entity being renamed.

Please note that some entities might enforce rules as to what you can rename an entity to. An example of this is TEMPLATE-entities, whose names have to be unique; the name itself might even be restricted by template definitions (the constraints are not decided by the template being renamed, but its name is located in its metadata and might therefore have other templates in effect for it).

In order to rename an entity do the following:

  1. Go to the entity in the tree that you wish to rename.
  2. Left-click the three horizontal lines to the left of the entity and the drop-down menu appears.
  3. Select “Rename…” and left-click again. A window/tab appears with the rename dialog.
Rename Group
  4. Write the new name that you wish to use and click the “Rename”-button to effect the change.

A new window should appear giving you a success message, or it will inform you of any issues with renaming your entity, such as missing permissions or an invalid name. Please take steps to correct any issues and try again.

How to search for an entity

Sometimes it might be difficult to know where in the entity tree an entity resides, and therefore the web-client supports searching for entities by expanding the tree in relevant places. Please note that the search mechanism is very rudimentary and is only meant to assist in finding what one is looking for in the tree.

You can search for an entity in two ways:

  1. By entering the entity’s unique ID in the search input.
  2. By entering the entity’s full or partial name in the search input.

To search, one enters the relevant information in the “Search for entity”-input box at the top of the “Manage Entity Tree”-page and hits ENTER. If the entered information is a number, it will be interpreted as a unique entity ID and the web-client will try to find that entity in the tree.

Search for entity by ID

If one instead wants to search for an entity by its name, one enters the name of the entity in the “Search for entity”-input box. If one only knows part of the name, one can end the value with a wildcard (*). It will then be interpreted as a case-insensitive wildcard search matching names that start with the entered value.
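
For example (the entity names are hypothetical):

nano*     # matches “Nanolab”, “NANOSCOPE-PC” and “nanotube-rig” (case-insensitive prefix match)
Nanolab   # matches only an entity named exactly “Nanolab” (exact match, including case)
42        # interpreted as the entity with unique ID 42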

Search for entity by name

Please be aware that when searching without a wildcard, you need to enter the exact name, including the case. When searching with a wildcard you do not need to worry about the case, but be aware that if the search matches several entities, all of those entities’ sub-trees will be expanded.

The matching entity(ies) might not get proper focus after you have searched, and there might be several hits. If you cannot find the match on the page, please use the search option of the browser itself to find it on the page.

How to set dataset lifespan policies

In order for the AURORA-system to know when a dataset is supposed to expire, it needs to have this information when creating datasets. For this to happen, one or more so-called “lifespan”-policies need to be defined in the entity tree. These policies are templates with settings in the namespace for the lifespan- and expire-settings. In order to adjust the expire time of a dataset, the user needs to have the DATASET_DELETE-permission.

These settings are located in the keys called:

system.dataset.open.lifespan
system.dataset.open.extendmax
system.dataset.open.extendlimit
system.dataset.close.lifespan
system.dataset.close.extendmax
system.dataset.close.extendlimit

so in order to define these policies, one has to create templates that set a default value for these keys. The same settings exist for both the “open” and “close” status of the dataset. The reason for this is that one can have the dataset exist for a certain amount of time in an open status before the AURORA-system triggers a close-dataset notification that the user(s) have to take into account. The reason for this mechanism is that all datasets should at some point be closed.

The “system.dataset.open.lifespan” sets the number of seconds that a dataset is to exist after it has been opened and before it is closed. The “system.dataset.open.extendmax” sets the number of seconds that the user(s) are allowed to extend the open dataset by when asking for an extension upon e.g. the dataset being asked to close. The “system.dataset.open.extendlimit” is the maximum number of seconds the dataset can exist in the open state, after which it cannot be extended anymore by the user(s). After the maximum extension limit has been reached, the only way to prevent an open dataset from eventually being closed is either to discard it or to have a user with the DATASET_EXTEND_UNLIMITED-permission on the dataset in question extend it.

The “system.dataset.close.lifespan” sets the number of seconds that the dataset is to exist after it has been closed. The “system.dataset.close.extendmax” sets the number of seconds that the user(s) are allowed to extend the closed dataset by when asking for an extension upon e.g. the dataset being ready for expiration/removal. The “system.dataset.close.extendlimit” is the maximum number of seconds the dataset can exist, after which the user cannot extend it anymore. After the maximum extension limit has been reached, the only way to prevent a closed dataset from eventually being removed is to have a user with the DATASET_EXTEND_UNLIMITED-permission on the dataset in question extend it.

It is natural that the limit-settings for an open dataset are much lower than for a closed one (although this is completely flexible). Here are some example settings:

system.dataset.open.lifespan       = 259200 # three days
system.dataset.open.extendmax      = 86400 # one day at a time
system.dataset.open.extendlimit    = 604800 # maximum extension time is a week after open
system.dataset.close.lifespan      = 2592000 # a month after closing the dataset
system.dataset.close.extendmax     = 1209600 # you can extend for up to 2 weeks at a time
system.dataset.close.extendlimit   = 15552000 # you cannot extend for more than 6 months
Dataset Lifespan Template

When the template has been created and defined, one has to assign it where one wants it to have effect in the entity tree. It is also possible to split these settings across different templates if so wished. The settings will then be inherited down the tree from where they were assigned, unless another template or templates happen to override them.

Please note that even though the settings will be written to the metadata of the datasets, they will only be read from the templates, not the metadata, when being used. Also take care that when assigning such templates, they are assigned as being valid for the entity type “DATASET”. Also take care not to enable any of the template flags, except “NONOVERRIDE” if that is desirable in a given situation. The reason for not choosing any of the others is that these settings are not supposed to be editable metadata, but taken directly from the templates; to avoid any issues it is best not to tick any of these flags.

For guides on how to create and edit templates, see the How to create an entity- and How to edit template-paragraphs. For assigning templates, please see How to assign templates.

By defining and assigning such templates one can control the lifespan policy down to the individual owner groups of datasets if one chooses to.

How to set notification interval policies

In order for the AURORA-system to know when to notify the users about datasets that are about to expire, one needs to set interval policies. These are defined in the form of templates assigned to the entity type “DATASET”.

The namespace for the setting is in the key:

system.dataset.notification.intervals

so in order to define these intervals, one has to create template(s) that set default values for them. This is done by assigning multiple values to the default-setting of the template.
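
A hypothetical set of default values, assuming the intervals are given in seconds (as with the lifespan settings above) and that each value triggers a notification that amount of time before expiration, could be:

system.dataset.notification.intervals = 2592000  # a month
system.dataset.notification.intervals = 604800   # a week
system.dataset.notification.intervals = 86400    # a day

Here the repetition of the key denotes multiple values on the default-setting of the template.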

Dataset Notification Intervals Template

Please note that one does not need to have the default values in ascending order; the order of the notification intervals is not important. Please also avoid setting any of the template flags, except NONOVERRIDE if that is needed in a given situation. The reason for this is that the settings in this template are used as a template and not as metadata.

For guides on how to create and edit templates, see the How to create an entity- and How to edit template-paragraphs. For assigning templates, please see How to assign templates.

By defining and assigning such templates one can control the notification interval policies down to the individual owner groups of datasets if one chooses to.

How to set remote control policies

In order for the AURORA-system to allow remote controlling of its computers, one needs to set one or more remote-control policies. These are defined in the form of templates assigned to the entity type “COMPUTER”.

The namespace for the setting is in the keys:

system.gatekeeper.host
system.gatekeeper.keyfile
system.gatekeeper.knownhosts
system.gatekeeper.protocols.RDP
system.gatekeeper.protocols.VNC
system.gatekeeper.script
system.gatekeeper.username

The host-setting is the host name of the server that is running the gatekeeper-service. The keyfile-setting is the name of the private keyfile to use when connecting to the gatekeeper-server. The public part of that key needs to be registered with the gatekeeper-server in the authorized-keys-file of the chosen user. The knownhosts-setting is the public key of the gatekeeper-server (it should be in the format: host-name knownhosts-value). The host name in knownhosts must match the host name in the host-parameter. The script-setting is the script to run on the gatekeeper-server to create/open the tunnel. The “username”-parameter is the username to use when logging into the gatekeeper-server with SSH and running the gatekeeper-script.
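
As a sketch, such a template’s default values might look like this (all values are hypothetical):

system.gatekeeper.host          = gatekeeper.example.ntnu.no
system.gatekeeper.keyfile       = gatekeeper.keyfile
system.gatekeeper.knownhosts    = gatekeeper.example.ntnu.no ssh-ed25519 AAAA...
system.gatekeeper.script        = /usr/local/bin/aurora-tunnel.sh
system.gatekeeper.username      = tunneluser
system.gatekeeper.protocols.RDP = 3389
system.gatekeeper.protocols.VNC = 5900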

Lastly, the system.gatekeeper.protocols.-settings are the valid connection protocols for the template. The name after protocols. can be anything one wants, but it needs to be uppercase. The value part of each key under protocols. is the port number of the protocol in question, so that e.g. the VNC-protocol would be:

system.gatekeeper.protocols.VNC = 5900

Please note that none of the template-settings for remote control should use any of the template flags. Doing so might cause unintended side-effects and problems.


For further questions, contact hjelp.ntnu.no