Discussion: [c-nsp] ASR9k QoS Scale/Usage Query
Robert Williams
2017-03-13 08:51:49 UTC
Hi All,

We run reasonably large QoS policies on various 9001 and 90xx chassis (an SE and TR mix, but all with Typhoon cards) and I’m looking for a way to query the ‘remaining capacity’ for QoS scale/growth. I’ve found various documentation that loosely describes the different QoS scale limitations, but nothing that seems to allow a query of the available hardware capacity or current usage levels. The nature of the policies we use (on BVIs, bundles, sub-interfaces, parent-child hierarchies, etc.), combined with the distributed NPs and the vague documentation, makes it difficult to gauge how much growth headroom we have in real terms. I want to avoid hitting an unexpected wall in the future.

Does anyone know if such a command/query still exists? Something like what the old “show platform hardware capacity xxxx” gives on the Catalysts would be ideal, but anything remotely related would be appreciated.

Cheers!


Robert Williams
Custodian Data Centre
Email: ***@CustodianDC.com
http://www.CustodianDC.com

R Maha
2017-03-14 05:34:27 UTC
Hi Robert,

These sites should be helpful for understanding QoS on this platform:

https://supportforums.cisco.com/document/59901/asr9000xr-understanding-qos-default-marking-behavior-and-troubleshooting
https://null.53bits.co.uk/index.php?page=asr9000-lag
http://www.alcatron.net/Cisco%20Live%202014%20Melbourne/Cisco%20Live%20Content/Service%20Provider/BRKSPG-2904%20%20ASR-9000%20IOS-XR%20Hardware%20Architecture,%20QOS,%20EVC,%20IOS-XR%20Configuration%20and%20Troubleshooting.pdf

Are you referring to this command:

show qoshal resource summary [np <np>]

Displays a summary of all the hardware and software resources used for
QoS, such as the number of policy instances, queues, and profiles.
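If it helps, below is a minimal sketch of collecting and parsing that output remotely (assuming SSH access and the netmiko library; the host and credentials are placeholders):

import re
from netmiko import ConnectHandler

# Placeholder device details -- adjust for your environment.
device = {
    "device_type": "cisco_xr",
    "host": "asr9k-1.example.net",
    "username": "admin",
    "password": "secret",
}

with ConnectHandler(**device) as conn:
    output = conn.send_command("show qoshal resource summary")

# Match lines like: "Policy Instances: Ingress 37 Egress 14 Total: 51"
pattern = r"Policy Instances:\s+Ingress\s+(\d+)\s+Egress\s+(\d+)\s+Total:\s+(\d+)"
for ingress, egress, total in re.findall(pattern, output):
    print(f"ingress={ingress} egress={egress} total={total}")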


Regards,
Rajendra
Robert Williams
2017-03-14 08:16:04 UTC
Hi Rajendra,
Post by R Maha
show qoshal resource summary [np <np>]
Thanks for that. I was familiar with that command, but I appear to be missing some information needed to turn its output into what I am looking for.

At a high level – I essentially need to know (or be able to calculate) what percentage* of hardware resources are being consumed by the current policies being used.

*I appreciate that this is going to be a figure ‘per NP’ and/or ‘per LC’ but regardless I need to know so that we can plan expansion.

Specifically, the command you shared gives output along these lines (on a random LC here):

<snip>
SUMMARY per NP:
=========================
Policy Instances: Ingress 37 Egress 14 Total: 51
Entities: (L4 level: Queues)
Level Chunk 0 Chunk 1 Chunk 2 Chunk 3
L4 78( 78/ 78) 14( 14/ 14) 22( 22/ 22) 20( 20/ 20)
L3(8Q) 19( 19/ 19) 3( 3/ 3) 6( 6/ 6) 6( 6/ 6)
L3(16Q) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0)
L2 7( 7/ 7) 2( 2/ 2) 4( 4/ 4) 4( 4/ 4)
L1 16( 16/ 16) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0)
Groups:
Level Chunk 0 Chunk 1 Chunk 2 Chunk 3
L4 19( 19/ 19) 3( 3/ 3) 6( 6/ 6) 6( 6/ 6)
L3(8Q) 9( 9/ 9) 2( 2/ 2) 4( 4/ 4) 4( 4/ 4)
L3(16Q) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0)
L2 7( 7/ 7) 2( 2/ 2) 4( 4/ 4) 4( 4/ 4)
L1 16( 16/ 16) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0)
Policers: Internal 658(658) Regular 252(252) Parent 0(0) Child 0(0) Total 910(910)

PROFILES:
WFQ:
Level Chunk 0 Chunk 1 Chunk 2 Chunk 3
L4 254( 254/ 78) 254( 254/ 14) 254( 254/ 22) 254( 254/ 20)
L3 256( 256/ 19) 256( 256/ 3) 256( 256/ 6) 256( 256/ 6)
L2 256( 256/ 7) 256( 256/ 2) 256( 256/ 4) 256( 256/ 4)
L1 64( 64/ 12) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0)
<snip>

However, I have no reference point for what the ‘maximum’ would be for any of those values (or, more specifically, I’m not aware of what the maximums are).

Take this output for example:

#show qos capability location 0/0/cpu0
Tue Mar 14 08:02:14.088 GMT
Capability Information:
======================
<snip>
Max Policy maps supported on this LC: 16384
Max classes per child-policy: 1024
Max classes per policy: 1024
<snip>

It shows the limitations of this LC, which is good, but I cannot find where to get the ‘current usage’ in the same context as the maximums listed here.

So I am after something like the number of policy maps currently active on the LC, or the number of classes per policy, without having to walk through the configurations on all ports and cards manually, of course.
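To illustrate the kind of correlation I'm after (a rough sketch only; whether the qoshal 'policy instance' count maps one-to-one onto the 'Max Policy maps' figure is an assumption I'd want confirmed):

def pct_used(used: int, maximum: int) -> float:
    """Utilisation as a percentage, guarding against divide-by-zero."""
    return 100.0 * used / maximum if maximum else 0.0

# Figures taken from the two outputs above.
policy_instances_on_np = 51        # 'show qoshal resource summary'
max_policy_maps_on_lc = 16384      # 'show qos capability location 0/0/cpu0'

print(f"{pct_used(policy_instances_on_np, max_policy_maps_on_lc):.2f}% "
      "of policy-map capacity used")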

I’m reasonably familiar with the actual operation of QoS on the chassis (and in general) – in reality it is the complexity and size of our QoS structure which has led to the need for us to accurately quantify the hardware usage levels.

If you have any additional input I’d very much appreciate it!

Best wishes & thanks,


Robert Williams
Custodian Data Centre
Email: ***@CustodianDC.com
http://www.CustodianDC.com




R Maha
2017-03-14 11:17:39 UTC
Hi Robert,

In my case, the output below gives me the number of policy instances in use:

SUMMARY per NP:
=========================
Policy Instances: Ingress 7357 Egress 7336 Total: 14693

CLIENT : QoS-EA
Policy Instances: Ingress 7324 Egress 7314 Total: 14638

Going by your output, it seems your NP is using 37 ingress and 14 egress policy instances, for a total of 51.

<snip>
SUMMARY per NP:
=========================
Policy Instances: Ingress 37 Egress 14 Total: 51

Please confirm if that is true.


Regards,
Rajendra
Robert Williams
2017-03-14 11:59:51 UTC
Hi Rajendra,
Post by R Maha
Please confirm if that is true.
Yes, that’s correct. The low numbers on my side were due to that being a lab chassis; I'm seeing hundreds (although not thousands like yours) on some of our production chassis.

I guess what I’m really looking for is a means to correlate those figures to actual ‘limits’ in the hardware itself.

Best wishes & thanks,




Robert Williams
Custodian Data Centre
Email: ***@CustodianDC.com
http://www.CustodianDC.com

Rimestad, Steinar
2017-03-16 12:29:30 UTC
Hi Robert.

The ASR9k Typhoon -TR cards have 8 queues per port (chunk) on the NP and 32K policers per NP.
-SE cards have 192K egress and 64K ingress (256K total) queues per NP, a 64K queue limit per chunk, and 256K policers per NP.

Bear in mind that if you are using BNG with queues/shapers, each subscriber will utilize 8 queues, counted in the QoSHAL output as L3(8Q) under "Entities". If you are using bundles, the QoS policy will be replicated on every member of the bundle, across all linecards as well.
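As a rough sketch of the budgeting arithmetic (using the per-NP figures above; verify them against your exact card and release):

# Queue budget per NP on an -SE Typhoon card (figures quoted above).
SE_EGRESS_QUEUES_PER_NP = 192_000
SE_INGRESS_QUEUES_PER_NP = 64_000
SE_CHUNK_QUEUE_LIMIT = 64_000
QUEUES_PER_BNG_SUBSCRIBER = 8      # one L3(8Q) entity per subscriber

# Hypothetical example: 5,000 BNG subscribers homed on one NP.
subscribers = 5_000
queues_used = subscribers * QUEUES_PER_BNG_SUBSCRIBER       # 40,000

# Bundle replication: a policy with Q queues on an N-member bundle
# consumes roughly N * Q queues, spread across the member-port NPs.
bundle_members, policy_queues = 4, 8
replicated_queues = bundle_members * policy_queues          # 32

print(f"{queues_used} of {SE_CHUNK_QUEUE_LIMIT} queues in one chunk "
      f"({100 * queues_used / SE_CHUNK_QUEUE_LIMIT:.0f}%)")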

/ Steinar



Robert Williams
2017-03-16 12:45:23 UTC
Hi Steinar,

Thanks for that. I believe we are OK on the policers and queues front (no BNG, but many multi-card bundles, and I'm aware of the replication to all NPs involved in the bundle). My main concern is the size and quantity of the ACLs we are using; on other platforms this would have consumed a lot of TCAM (all different ACLs, unique address ranges, etc.).

Some policies have 300+ classes, and each ACL has between 1 and 15 entries in it. I appreciate there is a limit on the number of classes (1024), but I cannot see anything that suggests how large the ACLs can be, or how much space is being consumed by the ones we are using.

So whilst we are within the 'class' limit of 1024, I'm concerned about how much resource is being consumed by the sheer quantity and size of the ACLs referenced by the 300+ class-maps, and I can find no way to query this usage level from the hardware side. There must be a limit, and I need to know whether we are 1% or 80% of the way there.
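The closest I can get is estimating it from the configuration side (a minimal sketch, assuming a saved copy of the running config; the file name is a placeholder, and configured ACE counts only approximate what the hardware actually programs):

import re
from collections import defaultdict

# Count the ACEs in each ipv4 access-list in a saved running config,
# then total the entries for the ACLs referenced by class-maps.
config = open("asr9k-running-config.txt").read()

ace_counts = defaultdict(int)
current_acl = None
for line in config.splitlines():
    m = re.match(r"ipv4 access-list (\S+)", line)
    if m:
        current_acl = m.group(1)
    elif current_acl and re.match(r"\s+\d+\s+(permit|deny)\b", line):
        ace_counts[current_acl] += 1
    elif line and not line[0].isspace():
        current_acl = None           # left the access-list stanza

# class-map entries look like: "match access-group ipv4 <name>"
referenced = set(re.findall(r"match access-group ipv4 (\S+)", config))

for acl in sorted(referenced):
    print(f"{acl}: {ace_counts.get(acl, 0)} ACEs")
print("total ACEs across referenced ACLs:",
      sum(ace_counts[a] for a in referenced))

Even then, I'd still want a hardware-side counter to check such an estimate against.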

Best wishes,



Robert Williams
Custodian Data Centre
Email: ***@CustodianDC.com
http://www.CustodianDC.com

