I'm trying to deploy a CPU endpoint with runpodctl serverless create --compute-type CPU, but it creates a GPU endpoint instead. Example output below:
$ runpodctl serverless create --name "serverless-cpu" --template-id "xxxxxxxxx" --compute-type "CPU" --workers-max 1
{
  "gpuCount": 1,
  ...,
  "idleTimeout": 10,
  "name": "serverless-cpu -fb",
  "scalerType": "QUEUE_DELAY",
  "scalerValue": 4,
  "template": {
    "category": "NVIDIA",
    "config": {
      "templateId": "xxxxxxxxx"
    },
    "containerDiskInGb": 20,
    "containerRegistryAuthId": "",
    "id": "xxxxxxxxxxxx",
    "isServerless": true,
    "name": "cpi-tmpl",
    "ports": [
      "8888/http",
      "22/tcp"
    ],
    "readme": "",
    "startJupyter": true,
    "startSsh": true
  },
  "templateId": "xxxxxxxx",
  "workersMax": 1
}
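To double-check that the endpoint really came back as GPU rather than CPU, the JSON response can be inspected programmatically. Here's a minimal sketch; the field names are taken from the output above, with values abbreviated and IDs redacted:

```python
import json

# Sample response mirroring the relevant fields from the
# runpodctl output above (other fields omitted for brevity).
response = json.loads("""
{
  "gpuCount": 1,
  "name": "serverless-cpu -fb",
  "template": {
    "category": "NVIDIA",
    "isServerless": true
  }
}
""")

# An endpoint that honored --compute-type CPU would presumably report
# no GPUs and a non-NVIDIA template category; here both indicate GPU.
is_gpu_endpoint = (
    response["gpuCount"] > 0
    or response["template"]["category"] == "NVIDIA"
)
print("GPU endpoint?", is_gpu_endpoint)
```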