Feature #16066

open

Request for scalable compute profile concept

Added by Ondřej Pražák over 7 years ago. Updated over 5 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
Compute resources
Target version:
-
Difficulty:
Triaged:
Fixed in Releases:
Found in Releases:

Description

Cloned from https://bugzilla.redhat.com/show_bug.cgi?id=1265724
1. Proposed title of this feature request
Scalable compute profile concept

3. What is the nature and description of the request?
The current concept of compute profiles does not scale when you have hundreds of different applications running on many different compute resources and versions of RHEL.

4. Why does the customer need this? (List the business requirements here)
The customer has approximately 200 different applications running on two or three different versions of their SOE (standard operating environment), and approximately 15 different VMware compute resources. To deploy these applications automatically, the customer requires one compute profile per application, per environment, and per SOE version. That gives the customer over 10,000 compute profiles, which is almost impossible to maintain.

5. How would the customer like to achieve this? (List the functional requirements here)
Suggestions for solutions (a rough sketch of the first idea follows below):

* Compute profiles that inherit information from other compute profiles. Short term, and possibly the easiest option: there would be a way to define parts of a compute profile and then have other compute profiles inherit that information.
* Provisioning templates for compute profiles. Long term: the ability to script, using the parameters available in a regular provisioning template, what a compute profile should look like. Imagine something like a provisioning template, or exactly that, but with the ability to set compute resource parameters.
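
As a rough illustration of the first suggestion, an application profile could carry only its unique values and have everything else merged in from a base profile. The field names and the merge_profiles helper below are hypothetical, not an existing Foreman API; they only show what "inheriting information from another compute profile" could mean in practice.

# Hypothetical sketch of compute-profile inheritance: an application profile
# only lists the values it overrides; everything else comes from a base profile.
# Field names and helpers are illustrative, not an existing Foreman API.

def merge_profiles(base: dict, override: dict) -> dict:
    """Return a copy of `base` with values from `override` merged on top."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merged[key] = merge_profiles(base[key], value)
        else:
            merged[key] = value
    return merged

# Base profile: shared by all applications on one SOE version and environment.
rhel7_prod_base = {
    "compute_resource": "VSphere server xyz",
    "guest_os": "Red Hat Enterprise Linux 7 64 bit",
    "folder": "Linux/Prod",
    "nic_type": "vmxnet3",
}

# Application profile: only the handful of values unique to this application.
app_xyz = {
    "cpus": 1,
    "memory_mb": 1234,
    "cluster": "XYZ",
    "resource_pool": "XYZ",
}

effective_profile = merge_profiles(rhel7_prod_base, app_xyz)
print(effective_profile)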

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
N/A

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?
No

8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

By 2015-11-30 the customer aims to have all standards and servers (~4000) migrated into Satellite 6.1. The customer will then hit this issue, possibly forcing them to halt automation of server and application deployment, depending on whether it is even possible to create thousands of compute profiles.

9. Is the sales team involved in this request and do they have any additional input?
No

10. List any affected packages or components.
N/A

11. Would the customer be able to assist in testing this functionality if implemented?
Yes

Actions #1

Updated by Dominic Cleal over 7 years ago

  • Category changed from VM management to Compute resources
Actions #2

Updated by Ondřej Pražák over 7 years ago

  • Target version set to 115

Additional info:

An example of a compute profile (for VMware, which we use) sets the following things. I've added notes on how unique each setting is.

Compute profile name: xyz
Compute resource: VSphere server xyz (One per datacenter)
CPUs: 1 (Unique for each application)
Memory (MB): 1234 (Unique for each application)
Cluster: XYZ (5-10 clusters per datacenter)
Resource pool: XYZ (Unique for each cluster and also depends on if it's a test or production system)
Folder: Linux / [Dev/Test/Prod] (One folder per environment)
Guest OS: Red Hat Enterprise Linux 123 (Unique for each major release of RHEL)
Virtual H/W version: (Is today set to default)

Network Interfaces:
NIC type: e1000/vmxnet3 (Depends on whether it's RHEL 5/6 or 7)
Network: VLAN123 (Unique for each application)
(There are two or more network interfaces defined in each profile)
NIC type: e1000/vmxnet3 (Depends on whether it's RHEL 5/6 or 7)
Network: VLAN321 (Unique for each application)

Storage:
SCSI controller: (Today the same for all profiles)
Data store: Depends on the application; there are 40-50 datastores that are valid selections for each application.
Name: Default name
Size: 123 (Unique for each application)
(There may be multiple hard disks defined in a compute profile)
SCSI controller: (Today the same for all profiles)
Data store: Depends on the application; there are 40-50 datastores that are valid selections for each application.
Size: 321 (Unique for each application)

Image: VMware-template-xyz (Depends on what version of RHEL and possibly also what application)

  1. Below are the settings that would still have to be set per application if a compute profile could inherit settings from other compute profiles (imagine one default compute profile for each version of our Linux standard, one for each compute environment, and one for each major platform configuration); a sketch of this layering follows the list below:

Compute profile name: xyz
CPUs: 1 (Unique for each application)
Memory (MB): 1234 (Unique for each application)
Cluster: XYZ (5-10 clusters per datacenter)
Resource pool: XYZ (Has to be set as it's unique for each datacenter, which is unique per cluster and environment)

Network Interfaces:
Network: VLAN123 (Unique for each application)
(There are two or more network interfaces defined in each profile)
Network: VLAN321 (Unique for each application)

Storage:
Size: 123 (Unique for each application)
(There may be multiple hard disks defined in a compute profile)
Size: 321 (Unique for each application)
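
To make the layering concrete, the reduced list above can be thought of as a small per-application override applied on top of hypothetical base profiles (one per SOE version, one per environment). The dictionaries and merge order below are assumptions for illustration only, not an existing Foreman mechanism.

from functools import reduce

# Hypothetical layering: the effective profile is built by applying each layer
# in order, so the application layer only defines the values listed above.
soe_base = {"guest_os": "Red Hat Enterprise Linux 7 64 bit", "nic_type": "vmxnet3"}
env_base = {"folder": "Linux/Prod", "compute_resource": "VSphere server xyz"}
app_override = {
    "cpus": 1,
    "memory_mb": 1234,
    "cluster": "XYZ",
    "resource_pool": "XYZ",
    "networks": ["VLAN123", "VLAN321"],
    "disk_sizes": [123, 321],
}

layers = [soe_base, env_base, app_override]
effective = reduce(lambda acc, layer: {**acc, **layer}, layers, {})
print(effective)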

  2. Now, if it were possible to programmatically describe a compute profile in a provisioning template, I could do something like the following pseudocode (a Python sketch of the same idea follows after the last case block):

case $(hostname -d) in
xxx)
Compute resource: xyz
;;
zzz)
Compute resource: zyx
;;
esac

case $(hostname -f) in
pattern)
Compute resource: yyy
;;
esac

case $(getAppType $(hostname -f)) in
pattern1)
CPUs: 1
Memory (MB): 1234
Cluster: XYZ
Resource pool: XYZ
Image: VMware-template-xyz
;;
pattern2)
CPUs: 2
Memory (MB): 4321
Cluster: ZYX
Resource pool: ZYX
Image: VMware-template-zyx
;;
esac

case $(hostname -f) in
pattern)
Folder: Linux / [Dev/Test/Prod]
;;
esac

case $(majorOS_version) in
5)
Guest OS: Red Hat Enterprise Linux 5 64 bit
;;
6)
Guest OS: Red Hat Enterprise Linux 6 64 bit
;;
7)
Guest OS: Red Hat Enterprise Linux 7 64 bit
;;
esac
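
The same selection logic, rendered in Python rather than shell-style pseudocode. All helper names, patterns, and mapping tables below are made up for illustration; this is not an existing Foreman interface.

# Hypothetical rendition of the pseudocode above: derive the compute attributes
# for a host from facts about it instead of keeping one static compute profile
# per application. None of these helpers exist in Foreman today.

APP_SIZING = {
    "pattern1": {"cpus": 1, "memory_mb": 1234, "cluster": "XYZ",
                 "resource_pool": "XYZ", "image": "VMware-template-xyz"},
    "pattern2": {"cpus": 2, "memory_mb": 4321, "cluster": "ZYX",
                 "resource_pool": "ZYX", "image": "VMware-template-zyx"},
}

GUEST_OS = {
    5: "Red Hat Enterprise Linux 5 64 bit",
    6: "Red Hat Enterprise Linux 6 64 bit",
    7: "Red Hat Enterprise Linux 7 64 bit",
}

def build_compute_attributes(domain: str, app_type: str,
                             environment: str, os_major: int) -> dict:
    """Select compute attributes the same way the case statements above do."""
    attrs = {"compute_resource": "xyz" if domain == "xxx" else "zyx"}
    attrs.update(APP_SIZING[app_type])          # per-application sizing
    attrs["folder"] = f"Linux/{environment}"    # one folder per environment
    attrs["guest_os"] = GUEST_OS[os_major]      # per major RHEL release
    return attrs

print(build_compute_attributes("xxx", "pattern1", "Prod", 7))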

An additional note on the information that would need to be set per application if compute profiles could inherit information from other compute profiles: in more than 50% of cases we would not have to set any storage information at all. If we could also use SMALL/MEDIUM/LARGE profiles for compute resources such as CPU/memory, we would be left with only about 5 values to set manually for each application's compute profile, instead of the 16-17 values each application profile has to define today. The only unique settings would then be:

Compute profile name: xyz
Cluster: XYZ (5-10 clusters per datacenter)
Resource pool: XYZ (Has to be set as it's unique for each datacenter, which is unique per cluster and environment)

Network Interfaces:
Network: VLAN123 (Unique for each application)
(There are two or more network interfaces defined in each profile)
Network: VLAN321 (Unique for each application)

Now, having scaled to over 100 compute profiles, I'd also like the ability to create and retrieve compute profiles via the API/CLI.

This is partly to save time and partly to improve compute profile quality, while also enabling other applications (orchestration tools) to create compute profiles that are then used in Satellite 6 (you could provision something via an orchestration tool while retaining the ability to easily provision that same system using Satellite 6).

I realize this may require a caching mechanism for compute resources, but that is something that will have to be addressed later anyway (I think). A caching mechanism wouldn't affect reliability much, as no real-time sanity checking is done when deploying servers using the Satellite 6 API/CLI.
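
For the API part of the request, the kind of call I have in mind would look roughly like the sketch below. The /api/compute_profiles endpoint and the payload shape are assumptions (no such compute-profile API exists today); this only illustrates the desired interface.

import requests

# Hypothetical sketch of creating a compute profile over a REST API.
# The endpoint and payload shape are assumptions, not a documented
# Foreman/Satellite interface at the time of this request.
FOREMAN = "https://satellite.example.com"

payload = {
    "compute_profile": {
        "name": "app-xyz-prod",
        # per-application values only; the rest would be inherited
        "compute_attributes": {"cpus": 1, "memory_mb": 1234, "cluster": "XYZ"},
    }
}

resp = requests.post(
    f"{FOREMAN}/api/compute_profiles",
    json=payload,
    auth=("admin", "changeme"),  # placeholder credentials
    verify=False,                # lab-only; use proper CA verification in production
)
resp.raise_for_status()
print(resp.json())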
