Rspec details for AM developers¶
General¶
There are 3 types of RSpecs: request, manifest and advertisement RSpecs.
All RSpecs must be valid XML, and for each type a slightly different schema is used. Note that “valid XML” is defined as:
- It is well formed XML (opening and closing tags match, etc.). This means the XML can be parsed without error by an XML parser.
- The XML follows the specified schema. This means that all rules specified in the schema are followed, and the rules concerning namespaces are followed.
The XSD files that define the RSpec schema for “geni version 3 RSpecs” can be found at the URI used to identify the namespace: http://www.geni.net/resources/rspec/3
Note
There are 3 root XSD files at that location: one for the request RSpec schema, one for the manifest RSpec schema and one for the advertisement RSpec schema.
Note
The URI used to define an XML namespace does NOT need to host any XSD files. This is often done as it is a convenient location, but the URI can also be only an identifier. Research XML namespaces if this is confusing.
The RSpec schemas allow a lot of freedom: they allow adding custom elements at different places in the RSpec. However, this is only allowed if the additions are in a different namespace than the “geni v3 rspec” namespace. This means you need to define a custom namespace for any RSpec extension you make!
Note that there are certain rules on how an AM should handle unknown RSpec extensions in request RSpecs. See the section on request RSpecs below.
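For example, a request node carrying a custom extension element could look like the sketch below. The mytestbed namespace and the nodecolor element are hypothetical, made up purely to illustrate that extensions live in their own namespace:
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="request"
       xmlns:mytestbed="http://example.com/rspec/ext/mytestbed/1">
  <node client_id="node0" exclusive="true" component_manager_id="urn:publicid:IDN+example.com+authority+cm">
    <sliver_type name="raw-pc"/>
    <!-- custom extension element, in its own (made-up) namespace -->
    <mytestbed:nodecolor color="blue"/>
  </node>
</rspec>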
Choosing your component manager urn¶
In the RSpecs, resources belong to a certain “component manager”. This is the authority responsible for these resources, and thus it is typically your AM itself.
This binding is represented in each RSpec using the component_manager_id attribute, which has a URN as value.
More info on the exact format of the URNs used can be found at http://groups.geni.net/geni/wiki/GeniApiIdentifiers.
Put simply, the URN used for the component_manager_id attribute is either of the form urn:publicid:IDN+TOPLEVELAUTHORITY:SUBAUTHORITY+authority+cm or of the form urn:publicid:IDN+TOPLEVELAUTHORITY:SUBAUTHORITY+authority+am. (Both endings, cm and am, are in use at different testbeds. We recommend am, but it doesn’t really matter.)
It is best NOT to use :SUBAUTHORITY at all, unless you really know you need it. So the typical URN is: urn:publicid:IDN+TOPLEVELAUTHORITY+authority+cm
The TOPLEVELAUTHORITY part of the URN is preferably the top level DNS name of your AM. In some cases, a nickname is also ok. Neither localhost nor 127.0.0.1 is allowed, and IP addresses should be avoided.
This URN is used in the following places:
- In the component_manager_id attribute of node elements in the advertisement RSpec.
- In the component_manager_id attribute of node and link elements in the request and manifest RSpec.
- Optionally: in the GetVersion reply, in the value urn field.
A few examples of real component manager URNs:
urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm
urn:publicid:IDN+utah.cloudlab.us+authority+cm
urn:publicid:IDN+fuseco.fokus.fraunhofer.de+authority+cm
urn:publicid:IDN+instageni.cs.princeton.edu+authority+cm
urn:publicid:IDN+exogeni.net+authority+am
urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am
RSpec basics: sliver_type and exclusive¶
The basic rspec format allows expressing raw bare metal hardware nodes as well as VMs/container nodes. The sliver_type
element is used, as well as the exclusive
attribute. Support for both is mandatory in each of the RSpec types.
The sliver_type
element is used to specify which sort of node is requested or available. The 2 common cases are bare metal, and virtual machines. You are free to pick a sliver_type name that makes sense for your testbed.
For bare metal nodes, emulab uses raw-pc
as sliver_type
name.
Some examples of sliver_type
names for virtual machines:
- default-vm is defined within GENI as a “convenience” sliver type. Each AM that supports VMs is supposed to replace this by the default VM sliver_type for that AM. This makes it easier to write portable RSpecs that can be executed on multiple testbeds, without changing the sliver_type for each testbed. It is useful to support this feature if your AM supports VMs, but it is not mandatory.
- emulab-xen and emulab-openvz are the VM sliver_type names used by emulab (and thus instageni).
- xo.tiny, xo.small, xo.medium, xo.large and xo.xlarge are the types used by exogeni. Note that they map to the “size” of the VM.
- docker-container is used by the ilab.t docker AM.
The exclusive
attribute is always true for bare metal hardware, since you always get exclusive access to it.
Example:
<node client_id="node0" exclusive="true" component_manager_id="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm">
<sliver_type name="raw-pc"/>
</node>
For VMs, the user can set the exclusive attribute of the node to either false or true. exclusive="false" means that other users can get a VM hosted on the same physical machine. exclusive="true" means that other users cannot get a VM hosted on the same physical machine. In most cases with VMs or containers, exclusive="false" is used.
Example:
<node client_id="node1" exclusive="false" component_manager_id="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm">
<sliver_type name="default-vm"/>
</node>
A request with exclusive="false" for a node where that is not supported by the testbed should not result in a failure. The testbed should just change it to true in the manifest. However, a request with exclusive="true" for a node for which the testbed does not support it should result in an error.
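As a sketch (with made-up example.com URNs): if a testbed only supports exclusive VMs and receives the request node below, it should not fail but simply return the node with exclusive="true" in the manifest (other manifest details such as sliver_id and services are omitted here):
<!-- request: exclusive="false", not supported by this hypothetical testbed -->
<node client_id="vm0" exclusive="false" component_manager_id="urn:publicid:IDN+example.com+authority+cm">
  <sliver_type name="xen-vm"/>
</node>
<!-- corresponding manifest node: exclusive silently changed to "true" -->
<node client_id="vm0" exclusive="true" component_manager_id="urn:publicid:IDN+example.com+authority+cm">
  <sliver_type name="xen-vm"/>
</node>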
In the advertisement RSpec, the AM needs to list each allowed sliver_type for each node. The exclusive attribute takes a different meaning in the advertisement. exclusive="false" means that a request may never request exclusive access to a node. exclusive="true" means a request may request exclusive access.
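A minimal advertisement sketch (made-up example.com URNs) showing both meanings: the first node may be requested with exclusive access, the second may not:
<node component_id="urn:publicid:IDN+example.com+node+pc1" component_manager_id="urn:publicid:IDN+example.com+authority+cm" component_name="pc1" exclusive="true">
  <sliver_type name="raw-pc"/>
</node>
<node component_id="urn:publicid:IDN+example.com+node+vmhost1" component_manager_id="urn:publicid:IDN+example.com+authority+cm" component_name="vmhost1" exclusive="false">
  <sliver_type name="default-vm"/>
</node>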
RSpec basics: Disk Images¶
Disk images are added as part of a sliver type, because they can differ depending on sliver type.
In an advertisement RSpec, the AM should list all possible disk images within each sliver_type element of each node. Note that this means there will be a lot of repetition! (That is not a nice feature of the “geni version 3 rspec” schema, but there is no way around it.)
It is not required that an AM supports this functionality. In case an AM does not support it, it should fail with the correct error when a disk_image
is specified in a request RSpec.
Example in a request:
<node client_id="node0" exclusive="true" component_manager_id="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm">
<sliver_type name="raw-pc">
<disk_image name="urn:publicid:IDN+wall2.ilabt.iminds.be+image+emulab-ops:CENTOS65-64-STD-BIG2"/>
</sliver_type>
</node>
In practice, the name field of a disk image contains a URN. jFed currently can only handle disk images containing a URN. The authority part of the URN should refer to the testbed, and the type part of the URN is always “image”.
Note that the geni version 3 rspec schema allows the following optional attributes to be specified in the advertisement RSpec (they are allowed in the request RSpec, but don’t make much sense there):
- os: The name of the OS.
- version: The version of the disk image or of the OS (it is not really specified anywhere which one).
- description: A textual description of the disk image. You can put anything that is helpful for users here.
- url: TODO: emulab supports disk images from other testbeds. This is not yet explained here.
To have jFed support disk images for a testbed, the jFed central config needs to be updated. Contact the jFed developers for this.
RSpec basics: Specific nodes (component_id)¶
Optionally, an AM can allow (or require) requests that demand a specific piece of hardware. This is done using the component_id
attribute. If this attribute is not specified, the testbed has to either fail the request (informing the user that component_id
is mandatory), or pick a suitable piece of hardware automatically.
If it makes sense for users not to choose a specific resource themselves, you should support automatic picking of resources. For example, on the vwall2 testbed you typically just want “any bare metal node” and don’t care which one you get, so component_id is optional there. If users should always be aware which resource they use, you should fail requests that omit it. For example, on the wilab2 testbed the location of the wireless node is important, so users always need to hand-pick the nodes, and component_id is thus mandatory.
Note that the advertisement RSpec can contain an additional component_name
attribute, which has a nickname for a node. This is typically the same as the last part of the component_id
. This component_name
attribute should NOT be used in a request RSpec. An AM should not require it, nor use it to assign specific nodes, only component_id
should be used for that.
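In an advertisement this could look as follows (a sketch with a made-up example.com node); note that component_name is simply the last part of the component_id:
<node component_id="urn:publicid:IDN+example.com+node+n082-01" component_name="n082-01" component_manager_id="urn:publicid:IDN+example.com+authority+cm" exclusive="true">
  <sliver_type name="raw-pc"/>
</node>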
Example in a request:
<node client_id="node0" exclusive="true"
component_manager_id="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm"
component_id="urn:publicid:IDN+wall2.ilabt.iminds.be+node+n082-01">
<sliver_type name="raw-pc"/>
</node>
RSpec basics: Hardware type¶
Optionally, an AM can allow (or require) requests that demand a specific type of hardware, but that do not specify the specific piece of hardware. This is done using the hardware_type
element. If this element is specified and no component_id
is specified, the testbed has to either fail the request (informing the user that hardware_type
is not supported), or pick a suitable piece of hardware, matching the type, automatically.
Note
It is important to understand the difference between hardware_type and sliver_type: the hardware_type identifies the kind of physical hardware a node is, while the sliver_type identifies what kind of sliver (e.g. bare metal access or a certain type of VM) you get on that hardware.
Example in a request:
<node client_id="node1" exclusive="true" component_manager_id="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm">
<sliver_type name="raw-pc"/>
<hardware_type name="gpunode"/>
</node>
In an advertisement RSpec, the hardware_type should be specified if this feature is supported in the request RSpec. Note that it is allowed to specify multiple hardware_type elements in an advertisement RSpec. It is ok to do so if that makes sense for your AM. But if possible, it’s nice to keep it simple and specify only 1 hardware_type per node.
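A sketch of an advertised node with two hardware_type elements (the type names and URNs are made up; only do this if it makes sense for your testbed):
<node component_id="urn:publicid:IDN+example.com+node+pc5" component_manager_id="urn:publicid:IDN+example.com+authority+cm" component_name="pc5" exclusive="true">
  <sliver_type name="raw-pc"/>
  <hardware_type name="pcgen03"/>
  <hardware_type name="gpunode"/>
</node>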
Advertisement Examples: Bare metal access¶
Scenario: You want to give users “bare metal access” to nodes. This means they get full access, not access to a VM or container. The user is the only one getting access to the machine, typically as “root” or as a user with sudo privileges.
Advertisement RSpec example:
<node component_id="urn:publicid:IDN+example.com+node+pi1" component_manager_id="urn:publicid:IDN+example.com+authority+am" component_name="pi1" exclusive="true">
<sliver_type name="raw-pc">
<disk_image name="urn:publicid:IDN+example.com+image+raspbian"/>
<disk_image name="urn:publicid:IDN+example.com+image+arch"/>
</sliver_type>
<hardware_type name="pc-raspberry-pi"/>
<available now="true"/>
<location country="NU" latitude="0.0" longitude="0.0"/>
</node>
Things to note:
- exclusive is true, because users get full access to the node.
- sliver_type is raw-pc. This is the typical sliver type used to represent bare metal access to “PC like” hardware.
- hardware_type is pc-raspberry-pi. The hardware type name should be something that identifies the type of hardware to a user. In this case, the hardware is a Raspberry Pi computer. Other examples are the pcgen01, pcgen02 and pcgen03 types, which are used on the imec virtual wall and mean “a PC of generation 1, 2 or 3”; users can then look up in the testbed documentation what the full specifications of each “generation” are.
- available now="true" means that this hardware is currently available.
- location is used to specify the location of the node. country is the country code, in this case an invalid code referring to “null island”. Of course you should use the real coordinates of your testbed nodes here. It is OK to use the same coordinates for all testbed nodes.
Advertisement Examples: Simple VMs¶
This example shows how you can offer a single type of VM.
TODO: explain
Advertisement RSpec example:
<node xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1" component_id="urn:publicid:IDN+example.com+node+vmhost1" component_manager_id="urn:publicid:IDN+example.com+authority+am" component_name="vmhost1" exclusive="false" >
<sliver_type name="xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu14"/>
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<hardware_type name="pc-vmhost">
<emulab:node_type type_slots="20"/>
</hardware_type>
<available now="true"/>
<location country="NU" latitude="0.0" longitude="0.0"/>
</node>
Matching Request RSpec example:
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="request" xmlns:jfed="http://jfed.iminds.be/rspec/ext/jfed/1">
<node client_id="vm-one" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="false">
<sliver_type name="xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<location xmlns="http://jfed.iminds.be/rspec/ext/jfed/1" x="100.0" y="100.0"/>
</node>
<node client_id="vm-two" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="false">
<sliver_type name="xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<jfed:location x="100.0" y="100.0"/>
</node>
</rspec>
Advertisement Examples: Complex VMs: multiple sizes¶
This example shows how you can offer different “sizes” of VM, where each size has a different number of CPU cores, memory, etc.
Each size will typically take a different number of type_slots
, for example, a “tiny-vm” will take 1 type slot, and a “large-vm” will take 5 type slots.
TODO: explain more
Advertisement RSpec example:
<node xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1" component_id="urn:publicid:IDN+example.com+node+vmhost1" component_manager_id="urn:publicid:IDN+example.com+authority+am" component_name="vmhost1" exclusive="false" >
<sliver_type name="vm-tiny">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu14"/>
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<sliver_type name="vm-medium">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu14"/>
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<sliver_type name="vm-big">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu14"/>
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<hardware_type name="pc-vmhost">
<emulab:node_type type_slots="30"/>
</hardware_type>
<available now="true"/>
<location country="NU" latitude="0.0" longitude="0.0"/>
</node>
Matching Request RSpec example:
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="request" xmlns:jfed="http://jfed.iminds.be/rspec/ext/jfed/1">
<node client_id="vm-one" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="false">
<sliver_type name="vm-tiny">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<location xmlns="http://jfed.iminds.be/rspec/ext/jfed/1" x="100.0" y="100.0"/>
</node>
<node client_id="vm-two" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="false">
<sliver_type name="vm-tiny">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<jfed:location x="100.0" y="100.0"/>
</node>
</rspec>
Advertisement Examples: Specialised hardware connected to single VM box¶
TODO: explain
<node component_id="urn:publicid:IDN+example.com+node+usrp2" component_manager_id="urn:publicid:IDN+example.com+authority+am" component_name="usrp2" exclusive="true">
<sliver_type name="usrp-vm">
<disk_image name="urn:publicid:IDN+example.com+image+plain"/>
<disk_image name="urn:publicid:IDN+example.com+image+gnuradio"/>
</sliver_type>
<hardware_type name="pc-usrp"/>
<available now="true"/>
<location country="NU" latitude="0.0" longitude="0.0"/>
</node>
Advertisement Examples: Combining Bare metal and VMs on a single node¶
TODO: explain
Hint: look at emulab advertisement RSpecs.
Note that not all possible scenarios can be expressed in this format. Also, for some scenarios that can be expressed, not all info is specified. RSpec has grown historically, and does not offer unlimited flexibility.
This is a complex case. It’s good to note that each sliver_type
works with one or more hardware_types
, and will not work with certain other hardware_types
. This info is not specified anywhere.
In the example below, the hardware_type
“pc-gen1” and “pc-gen2” have a sliver_type
“raw-pc”, and the hardware_type
“pc-vmhost” has sliver_type
“small-xen-vm” and “big-xen-vm” (these match resources allocated to the VM, such as cores and memory. As an example, “small-xen-vm” takes 2 type slots, and “big-xen-vm” takes 5 type slots).
Example advertisement:
<node xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1" component_id="urn:publicid:IDN+example.com+node+nodeA" component_manager_id="urn:publicid:IDN+example.com+authority+am" component_name="nodeA" exclusive="true" >
<sliver_type name="small-xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu14"/>
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<sliver_type name="big-xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu14"/>
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
<sliver_type name="raw-pc">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
<disk_image name="urn:publicid:IDN+example.com+image+arch"/>
</sliver_type>
<hardware_type name="pc-vmhost">
<emulab:node_type type_slots="20"/>
</hardware_type>
<hardware_type name="pc-gen2">
<emulab:node_type type_slots="1"/>
</hardware_type>
<available now="true"/>
<location country="NU" latitude="0.0" longitude="0.0"/>
</node>
Example request for any bare metal node of type “pc-gen2” (there could be bare metal nodes of type “pc-gen1” offering sliver_type
“raw-pc” as well):
<node client_id="gen2nodeA" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="true">
<sliver_type name="raw-pc">
<disk_image name="urn:publicid:IDN+example.com+image+arch"/>
</sliver_type>
<hardware_type name="pc-gen2"/>
</node>
Example request for bare metal access to “nodeA”:
<node client_id="nodeA" component_id="urn:publicid:IDN+example.com+node+nodeA" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="true">
<sliver_type name="raw-pc">
<disk_image name="urn:publicid:IDN+example.com+image+arch"/>
</sliver_type>
</node>
Example request for a “big” VM on any VM node:
<node client_id="vm1" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="false">
<sliver_type name="big-xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
</node>
Example request for 2 VM nodes on “nodeA”, with exclusive hardware access:
<node client_id="small-vm-a" component_id="urn:publicid:IDN+example.com+node+nodeA" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="true">
<sliver_type name="small-xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
</node>
<node client_id="big-vm-b" component_id="urn:publicid:IDN+example.com+node+nodeA" component_manager_id="urn:publicid:IDN+example.com+authority+am" exclusive="true">
<sliver_type name="big-xen-vm">
<disk_image name="urn:publicid:IDN+example.con+image+ubuntu16"/>
</sliver_type>
</node>
Request RSpec¶
This section describes which request RSpec features you can use for your AM. jFed will automatically offer a lot of functionality when this is done correctly.
Some things to take into account in request RSpecs:
- Each node will have exactly one sliver_type in a request.
- Each sliver_type will have zero or one disk_image elements. If your testbed requires disk_image or does not support it, it should handle bad request RSpecs with the correct error.
- The exclusive attribute is specified for each node in the request. Your testbed should check if the specified value (in combination with the sliver_type) is supported, and return the correct error if not.
- The request RSpec might contain links that have a component_manager element that matches your AM. If your AM does not support links, it should return the correct error.
Some information in a request RSpec needs to be ignored and copied to the manifest RSpec unaltered. This is important to do correctly.
- A request RSpec can contain nodes that have a component_manager_id set to a different AM. You need to ignore these nodes, and copy them to the manifest RSpec unaltered.
- A request RSpec can contain links that do not have a component_manager matching your AM (links have multiple component_manager elements!). You need to ignore these links, and copy them to the manifest RSpec unaltered.
- A request RSpec can contain XML extensions in nodes, links, services, or directly in the rspec element. Some of these your AM will not know. It has to ignore these, and preferably also pass them unaltered to the manifest RSpec.
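As a sketch (with made-up URNs): a link element lists each involved AM in a component_manager child element, and this is what your AM should check to decide whether the link concerns it:
<link client_id="link0">
  <component_manager name="urn:publicid:IDN+example.com+authority+cm"/>
  <component_manager name="urn:publicid:IDN+other-testbed.example.org+authority+cm"/>
  <interface_ref client_id="node0:if0"/>
  <interface_ref client_id="node1:if0"/>
</link>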
Example request:
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="request" generated_by="jFed RSpec Editor" generated="2019-02-05T07:52:31.901+01:00" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd ">
<node client_id="node0" exclusive="true" component_manager_id="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm">
<sliver_type name="raw-pc"/>
<location xmlns="http://jfed.iminds.be/rspec/ext/jfed/1" x="100.0" y="100.0"/>
</node>
<node client_id="node1" exclusive="false" component_manager_id="urn:publicid:IDN+wall1.ilabt.iminds.be+authority+cm">
<sliver_type name="xen-vm"/>
<location xmlns="http://jfed.iminds.be/rspec/ext/jfed/1" x="200.0" y="100.0"/>
</node>
</rspec>
Example manifest from wall2:
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="manifest" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd ">
<node client_id="node0" exclusive="true" component_manager_id="urn:publicid:IDN+wall2.ilabt.iminds.be+authority+cm" component_id="urn:publicid:IDN+wall2.ilabt.iminds.be+node+n088-77" sliver_id="urn:publicid:IDN+wall2.ilabt.iminds.be+sliver+145278">
<sliver_type name="raw-pc">
<disk_image name="urn:publicid:IDN+wall2.ilabt.iminds.be+image+emulab-ops:UBUNTU18-64-STD"/>
</sliver_type>
<services>
<login authentication="ssh-keys" hostname="n088-77.wall2.ilabt.iminds.be" port="22" username="myuser"/>
<login authentication="ssh-keys" hostname="n088-77.wall2.ilabt.iminds.be" port="22" username="otherusr"/>
</services>
<location xmlns="http://jfed.iminds.be/rspec/ext/jfed/1" x="100.0" y="100.0"/>
</node>
<node client_id="node1" exclusive="false" component_manager_id="urn:publicid:IDN+wall1.ilabt.iminds.be+authority+cm">
<sliver_type name="xen-vm"/>
<location xmlns="http://jfed.iminds.be/rspec/ext/jfed/1" x="200.0" y="100.0"/>
</node>
</rspec>
Note that the node node1
is copied unaltered to the manifest by the wall2 AM. The element <location>
in node0
is also copied unaltered to the manifest.
Manifest RSpec¶
Special case: SSH login through SSH proxy¶
Not all testbeds offer direct SSH access to all nodes. A typical solution is to offer access through an SSH proxy (sometimes called “skip host”).
To log in to the requested resources, a user needs to forward the SSH connection over a separate SSH connection to the proxy node (so the connection becomes SSH-over-SSH).
jFed supports a manifest extension to describe this scenario, and will setup the SSH-over-SSH connection transparently for the users. More information about this manifest extension can be found here: https://fed4fire-testbeds.ilabt.iminds.be/asciidoc/rspec.html#_ssh_proxy_manifest_rspec_extension
Note: jFed also offers SSH-over-SSH using the central jFed proxy. This is needed for many users to escape local firewalls. SSH-over-SSH-over-SSH is currently not supported, so testbed specific SSH proxies cannot be used in combination with the central jFed proxy. (It might be possible for users to set this up manually on linux.)
Special case: Indirect testbed resource access¶
Not all testbeds offer resources that can be logged in to using SSH (directly or through a proxy). Some testbeds require you to log in (typically using SSH) to a host separate from the requested resources, on which commands can be executed to manage and use the testbed resources. We will call this host the “management node” in the rest of this section.
There are a few options here:
- Users might be required to explicitly request a management node in the request RSpec.
- The testbed might have a dedicated shared management host, which all users share. In this case, users will not specify the management node in the request RSpec. The testbed will add the info about the management node in the manifest RSpec. The testbed needs to manage user accounts and SSH access in order to allow users to log in to the management node.
- The testbed might automatically add a management node to each experiment. This case is similar to the case with a dedicated shared management host: it is not in the request RSpec, but it is added to the manifest RSpec. However, in this case, an exclusive management host is set up for each experiment. Often, a VM or docker container is used as management node in this scenario.
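For the cases where the testbed adds the management node to the manifest itself, one possible approach (a sketch only, not a prescribed format; all names below are made up) is to reuse the standard services/login element and point it at the management host instead of at the resource:
<node client_id="sensor0" component_manager_id="urn:publicid:IDN+example.com+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+example.com+sliver+1234">
  <sliver_type name="sensor-node"/>
  <services>
    <!-- login info points at the shared management host, not at the resource itself -->
    <login authentication="ssh-keys" hostname="mgmt.example.com" port="22" username="myuser"/>
  </services>
</node>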
On some testbeds, a dedicated shared management host or an exclusive management host is also used as SSH proxy to reach the nodes. This makes sense if the requested nodes need to be initialised somehow from the management node, but SSH login is possible once they are ready.
Note that in all these scenarios, it is very important to have good documentation available for the experimenters.
Advertisement RSpec extension: Hardware information¶
jFed supports an advertisement RSpec extension to specify info about each hardware type. This is used to show more information about hardware types in the jFed experimenter GUI.
Specification and Grid5000 example: https://grid5000.gitlabpages.inria.fr/gcf-grid5000-plugin/hwinfo.html
Wall2 example:
<hardware_type_info xmlns="https://doc.fed4fire.eu/rspec/ext/hwinfo/1">
<overview media-type="text/plain">Test</overview>
<overview media-type="text/html" href="https://doc.ilabt.imec.be/ilabt-documentation/virtualwallfacility.html#virtual-wall-2"/>
<hardware_type name="pcgen03-1p" hrn="Generation 3 (1 iface)">
<info media-type="text/plain">2x Hexacore Intel E5645 (2.4GHz) CPU, 24GB RAM, 1x 250GB harddisk, 1 gigabit nic</info>
</hardware_type>
...
</hardware_type_info>