Discussion:
[c-nsp] Juniper MX240 & MX480
Mark Mason
2017-10-25 14:54:33 UTC
Can someone educate me on the Juniper MX240 and MX480 chassis? I am not Juniper savvy, but we are gaining education quickly! It 'seems' like the MX240 and MX480 chassis have been around quite some time. If you had a brand-new purchase on the horizon, would that concern you? Would you be worried about a chassis that is pretty long in the tooth? Correct me if I'm wrong, but conceptually it feels like a 6500: a chassis that has been around a LONG time, where the supervisor cards have been updated and the line cards have been updated, but the backplane of the thing is old. Please correct my thinking and/or add your thoughts! I greatly appreciate your time in this matter.

Sebastian Becker
2017-10-25 22:26:32 UTC
I can, but I think this is the wrong list.


Sebastian Becker
Post by Mark Mason
Can someone educate me on the Juniper MX240 and MX480 chassis? [...]
Youssef Bengelloun-Zahr
2017-10-26 09:18:13 UTC
Hi,

There is a planned webinar today to present the new model:


"Register now for our MX150 webinar
<https://channelnewsondemand.juniper.net/emailredir/?eid=46&tid=73&u=8A0B4CEA1A1165FE6BBDC8E7011B38&pr=0&url=https%3A%2F%2Fjpartnertraining.juniper.net%2Fevent%2Fview%2F5a577bea-273a-4001-b970-a25d78ed9bd2>
on Wednesday, October 26, at 8 am PT/4 pm UK/5 pm CET.

You will learn all about our new MX150 3D Universal Edge Router—a
high-performance 40G router powered by vMX software running on a compact,
1RU x86-based platform. It provides T2/T3 service providers and enterprises
with a solution that offers advanced services with high control plane scale
and performance.

The MX150 offers feature and operations consistency with the vMX; it runs
the same Junos that powers the entire Juniper portfolio and uses vTrio
(programmable Trio chipset microcode optimized for execution in x86
environments) on the forwarding plane.

This new addition to the MX Series family addresses the need for routers
that combine low bandwidth with advanced routing capabilities, supporting
service providers and enterprises alike. Supported routing features
include EVPN and VXLAN, plus Carrier Ethernet-grade services such as CGNAT,
SFW and DPI. Both 1GbE and 10GbE port speeds are available.

By joining this webinar, you will:

- Find out where the MX150 fits in the MX portfolio
- Get a detailed view of the control plane performance and hardware
specifications
- Find out how to position the MX150 for the supported use cases
- Learn how the MX150 is priced and ordered

Please register now
<https://channelnewsondemand.juniper.net/emailredir/?eid=46&tid=73&u=8A0B4CEA1A1165FE6BBDC8E7011B38&pr=0&url=https%3A%2F%2Fjpartnertraining.juniper.net%2Fevent%2Fview%2F5a577bea-273a-4001-b970-a25d78ed9bd2>;
we look forward to your participation on the 26th."

Best regards.
Hi,
Post by Sebastian Becker
Post by Mark Mason
Can someone educate me on the Juniper MX240 and MX480 chassis
I can, but I think this is the wrong list.
Indeed.
https://puck.nether.net/mailman/listinfo/juniper-nsp
In related news, I see that Juniper has announced the MX150 ("vTrio" which
I assume means vMX on x86) and MX204 -- both 1RU. It's about time :-)
cheers,
Dale
Aaron Gould
2017-10-26 11:53:32 UTC
Dang, the MX204 has a possible (4) 100-gig interfaces ... in 1RU! (I heard
something about Juniper "Summit" or "Vale" a while back ... maybe that's these
150 and 204.)

https://www.juniper.net/us/en/products-services/routing/mx-series/compare?p=MX204

Someone is already using them, guessing a Facebook FNA caching site...
http://new.commverge.com/Announcements/tabid/83/EntryId/176/CommVerge-Hong-Kong-deploys-Juniper-MX-204-Routing-Switch-in-Facebook-Hong-Kong-Site.aspx

I read something about MPLSoUDP, VXLAN, EVPN, SR-MPLS and SRv6... seems
like it does the newer stuff.

Yeah, this is the wrong list... hey, y'all started it, lol

-Aaron



Mark Tinka
2017-10-31 05:56:37 UTC
Indeed.
https://puck.nether.net/mailman/listinfo/juniper-nsp
In related news, I see that Juniper has announced the MX150 ("vTrio" which
I assume means vMX on x86) and MX204 -- both 1RU. It's about time :-)
Don't forget about the MX10003.

Mark.
Saku Ytti
2017-10-31 08:48:01 UTC
Post by Mark Tinka
In related news, I see that Juniper has announced the MX150 ("vTrio" which
I assume means vMX on x86) and MX204 -- both 1RU. It's about time :-)
Don't forget about the MX10003.
Am I the only one puzzled about the MX204 port choice, 4xQSFP28 + 8xSFP+? Seems like it's positioned at datacenters facing upstream? I'd want 2xQSFP28 and maybe 36xSFP+ (oversub is fine), with attractive licensing using SFP+ as SFP only, to add an L3 DFZ 1GE aggregation box to the JNPR portfolio.
I can't imagine rolling this would be expensive; call it the MX202 or something. JNPR, do me a solid.

(resent from correct mailbox)
--
++ytti
s***@nethelp.no
2017-10-31 10:08:05 UTC
Post by Saku Ytti
Am I the only one puzzled about the MX204 port choice, 4xQSFP28 + 8xSFP+? Seems like it's positioned at datacenters facing upstream? I'd want 2xQSFP28 and maybe 36xSFP+ (oversub is fine), with attractive licensing using SFP+ as SFP only, to add an L3 DFZ 1GE aggregation box to the JNPR portfolio.
I can't imagine rolling this would be expensive; call it the MX202 or something. JNPR, do me a solid.
We discussed this with Juniper. We're hearing a lot about the space available
on the faceplate versus the number of ports desired. However, the MX204 feels
mostly irrelevant for us because splitter cables for 10G are not usable
(since we have quite a bit of CWDM/DWDM optics directly in our routers).

If faceplate space is the issue, we would actually be much happier with a
2RU box - especially if they could reduce the depth to 30 cm or thereabouts.

Steinar Haug, Nethelp consulting, ***@nethelp.no
Chris Welti
2017-10-31 10:52:11 UTC
Regarding CWDM/DWDM, you could always add a QFX5110-48SH as a port-extender box to the MX204 with Junos Fusion Provider Edge, sacrificing one or two 100G QSFP28 ports on the MX204.
That way you'd have 2x100G plus 48x 1/10G SFP+ ports, with a bit of oversubscription, in 2RU.
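As a rough illustration of the Fusion side - a sketch from memory of the Junos Fusion Provider Edge docs, not a verified config, with placeholder slot and interface names - the MX end boils down to marking an uplink as a cascade port and binding it to a satellite FPC slot:

    set interfaces et-0/0/2 cascade-port
    set chassis satellite-management fpc 101 cascade-ports et-0/0/2
    set chassis satellite-management fpc 101 alias qfx-sat1

The satellite's ports should then show up on the MX as extended ports (xe-101/0/x).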
Does anyone know if you can use the onboard 8x SFP+ ports on the MX204 if you use all four QSFP28 ports in 100G mode (with a bit of oversubscription)?
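Back-of-the-envelope, assuming the MX204 is built around a single EA Trio capped at 400G (the same rate-limiting discussed further down this thread for the MPC9e and the 10003 MPC):

    4 x 100G + 8 x 10G = 480G > 400G

so running every port at full rate would mean roughly 1.2:1 oversubscription.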

Btw, pricing per 100G on the MX204 and MX10003 seems pretty good compared to the other MX boxes and ASR9K.

Regards,
Chris
Post by s***@nethelp.no
Post by Saku Ytti
Am I the only one puzzled about the MX204 port choice, 4xQSFP28 + 8xSFP+? [...]
We discussed this with Juniper. We're hearing a lot about the space available
on the faceplate versus the number of ports desired. However, the MX204 feels
mostly irrelevant for us because splitter cables for 10G are not usable
(since we have quite a bit of CWDM/DWDM optics directly in our routers).
If faceplate space is the issue, we would actually be much happier with a
2RU box - especially if they could reduce the depth to 30 cm or thereabouts.
Mark Tinka
2017-11-01 09:09:17 UTC
Post by Saku Ytti
Am I the only one puzzled about the MX204 port choice, 4xQSFP28 + 8xSFP+? Seems like it's positioned at datacenters facing upstream? I'd want 2xQSFP28 and maybe 36xSFP+ (oversub is fine), with attractive licensing using SFP+ as SFP only, to add an L3 DFZ 1GE aggregation box to the JNPR portfolio.
I can't imagine rolling this would be expensive; call it the MX202 or something. JNPR, do me a solid.
Still not exactly the Metro-E solution we've been longing for from
Juniper, but I do see a use-case where customers in the Metro want
10Gbps ports.

But that will be the exception, and not the rule. At least for us.

Mark.
a***@netconsultings.com
2017-10-31 13:33:39 UTC
Mark Tinka
Sent: Tuesday, October 31, 2017 5:57 AM
Indeed.
https://puck.nether.net/mailman/listinfo/juniper-nsp
In related news, I see that Juniper has announced the MX150 ("vTrio"
which I assume means vMX on x86) and MX204 -- both 1RU. It's about
time :-)
Don't forget about the MX10003.
Interesting; seems like two MPC7 cards connected back to back. I'm wondering how
they managed to connect 4 Trio chips in a non-blocking fashion with no
crossbar.
Does anyone have any material on the platform internals, please?


adam

Mark Tinka
2017-10-31 14:40:56 UTC
Post by a***@netconsultings.com
Interesting; seems like two MPC7 cards connected back to back. I'm wondering how
they managed to connect 4 Trio chips in a non-blocking fashion with no
crossbar.
Does anyone have any material on the platform internals, please?
There is a crossbar.

Each MPC supports 1.2Tbps (delivered via 3x 3rd-gen Trio chipsets in
each MPC); the chipsets handle the MICs and interconnect to the crossbar.

Mark.
a***@netconsultings.com
2017-11-01 10:25:34 UTC
Sent: Tuesday, October 31, 2017 2:41 PM
There is a crossbar.
The only thing I found about it was:
Switch fabric capacity per slot N/A
Each MPC supports 1.2Tbps (delivered via 3x 3rd-gen Trio chipsets in
each MPC); the chipsets handle the MICs and interconnect to the crossbar.
Hmm, that doesn't seem right;
the materials say it's 1.2T per chassis.
So that's 600G per card, and the 3rd-gen (EA) Trio is rated at 240G, so 3 of them would give you max 720G.

adam


Chris Welti
2017-11-01 10:54:55 UTC
The 3rd-gen (EA) Trio chip is actually rated at 480G; it has been rate-limited to 240G in the MPC7e and MPC8e line cards and
is currently rate-limited to 400G in the MPC9e cards and the 10003 MPC.
I have no idea why it is rate-limited, but I suspect thermal issues.
See the book Juniper MX Series, 2nd Edition by David Roy, Harry Reynolds, Douglas Richard Hanks

The documentation of the MX10003 MPC clearly says it's 1.2T per card:
https://www.juniper.net/documentation/en_US/junos/topics/concept/mpc10003-overview.html

System capacity of the MX10003 chassis is claimed to be 4.8T, so either there might be future MPC line cards with 2.4T
or they are counting capacity per direction ;)
https://www.juniper.net/us/en/products-services/routing/mx-series/mx10003/
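For what it's worth, if you assume three EA chips per MPC at the 400G limit, the per-direction reading does make the marketing number close:

    3 x 400G = 1.2T per MPC
    2 x 1.2T = 2.4T per chassis
    2.4T x 2 (rx + tx) = 4.8T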

Chris
Post by a***@netconsultings.com
Sent: Tuesday, October 31, 2017 2:41 PM
There is a crossbar.
Switch fabric capacity per slot N/A
Each MPC supports 1.2Tbps (delivered via 3x 3rd-gen Trio chipsets in
each MPC); the chipsets handle the MICs and interconnect to the crossbar.
Hmm, that doesn't seem right;
the materials say it's 1.2T per chassis.
So that's 600G per card, and the 3rd-gen (EA) Trio is rated at 240G, so 3 of them would give you max 720G.
adam
Mark Tinka
2017-11-01 14:34:55 UTC
Post by Chris Welti
System capacity of the MX10003 chassis is claimed to be 4.8T, so either there might be future MPC line cards with 2.4T
or they are counting capacity per direction ;)
https://www.juniper.net/us/en/products-services/routing/mx-series/mx10003/
It's actually future-proofed up to 14.2Tbps/slot.

I'm loving this box for dedicated 40Gbps and 100Gbps edge ports for
customers. I've found that mixing and matching 1Gbps/10Gbps ports with
40Gbps/100Gbps ports in our MX480 is reasonably costly. So the MX10003
is a welcome addition.

Mark.
a***@netconsultings.com
2017-11-01 14:38:18 UTC
Sent: Wednesday, November 01, 2017 10:55 AM
The 3rd-gen (EA) Trio chip is actually rated at 480G; it has been rate-limited to
240G in the MPC7e and MPC8e line cards and is currently rate-limited to 400G in
the MPC9e cards and the 10003 MPC.
Sorry, my bad - yes, I totally forgot about the MPC9 cards, which use two EAs to get 800G per MIC.
I didn't actually know about the 480G peak; one learns something new every day.
Alright, so the 10003 MPC is actually using 3x the 400G version.

adam


Mark Tinka
2017-11-01 14:52:05 UTC
Post by a***@netconsultings.com
Alright, so the 10003 MPC is actually using 3x the 400G version.
Yep...

Mark.

Mark Tinka
2017-11-01 14:32:57 UTC
Post by a***@netconsultings.com
Hmm, that doesn't seem right;
the materials say it's 1.2T per chassis.
So that's 600G per card, and the 3rd-gen (EA) Trio is rated at 240G, so 3 of them would give you max 720G.
Not on the internals I've been provided so far.

What I can add is that each slot has been built to scale up to 14.2Tbps.

Mark.
a***@netconsultings.com
2017-10-26 08:26:11 UTC
The selection of the tool depends on the job to be done, and you haven't
provided any info on what you intend to use the boxes for, so I can only
generalize.
If your network is carrying traffic of a single priority level, or if it just
can't get congested, then you'll be fine (well, you'll still have to bear the
stupid BGP implementation in Junos).
If the above is not your case, then save yourself a bunch of trouble and go
with the ASR9k line instead.

adam

Saku Ytti
2017-10-26 11:38:22 UTC
This does not sound constructive to me. I know networks having fewer
problems with JunOS BGP than with IOS-XR BGP. I know several networks running
QoS on MX successfully. I am not saying IOS-XR is worse or better; I'm
saying this is a subjective opinion based on anecdotes. Another
subjective opinion based on a few anecdotes might be: run away from the
ASR9k at all cost, and review the situation in 5 years' time when others have
beta-tested XRe and the EZchip=>Lightspeed migration is done.
Post by a***@netconsultings.com
The selection of the tool depends on the job to be done, and you haven't
provided any info on what you intend to use the boxes for, so I can only
generalize.
If your network is carrying traffic of a single priority level, or if it just
can't get congested, then you'll be fine (well, you'll still have to bear the
stupid BGP implementation in Junos).
If the above is not your case, then save yourself a bunch of trouble and go
with the ASR9k line instead.
adam
--
++ytti
a***@netconsultings.com
2017-10-26 11:55:39 UTC
Regarding the QoS - sorry, my bad, I wasn't specific enough: I didn't mean link congestion, I meant Trio chip overload (BW- or PPS-wise).
Regarding the BGP implementation - yes, I agree that's my subjective opinion; I just happen to work with both XR and Junos BGP and now have "high" expectations of the Junos implementation.


adam
Sent: Thursday, October 26, 2017 12:38 PM
This does not sound constructive to me. I know networks having fewer
problems with JunOS BGP than with IOS-XR BGP. I know several networks running
QoS on MX successfully. I am not saying IOS-XR is worse or better; I'm saying
this is a subjective opinion based on anecdotes. Another subjective opinion
based on a few anecdotes might be: run away from the ASR9k at all cost, and
review the situation in 5 years' time when others have beta-tested XRe and
the EZchip=>Lightspeed migration is done.
Post by a***@netconsultings.com
The selection of the tool depends on the job to be done, and you haven't
provided any info on what you intend to use the boxes for, so I can only
generalize. [...]
adam
--
++ytti
Daniel Verlouw
2017-10-27 13:56:36 UTC
Hi Adam,
Post by a***@netconsultings.com
Regarding the QoS - sorry, my bad, I wasn't specific enough: I didn't mean link congestion, I meant Trio chip overload (BW- or PPS-wise).
Apparently this is something you are regularly having problems with,
otherwise you wouldn't keep bringing this up over and over again. Is
this really such a big deal for you? In an artificial environment,
sure, but in production? I don't know your network, nor your business
objectives, but I'd say something is broken in your architecture,
design and/or capacity-planning rules if you are constantly battling
Trio chip congestion. Please educate us.

--Daniel.
Aaron Gould
2017-10-26 11:59:15 UTC
The thing that caused me to evaluate replacing my ASR9k 15-node network was
when Cisco told me that if I replaced my RSP-4G routing engine with the newest
one, all my 1st-gen Trident line cards would stop working. :|

So since I had to forklift everything, I thought it was time to re-evaluate
what is out there.

We needed CGNAT also.

We decided to go with MX960s with MS-MPCs in them, and MPC7E line cards with
QSFP28 interfaces for building a 100-gig MPLS core.

I liked the Juniper CGNAT better than the Cisco ASR9000 VSM-500.
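For anyone curious what the MS-MPC flavour looks like, here's a minimal NAPT44 sketch (pool names, prefixes and the ms- and subscriber-facing interfaces are placeholders, written from memory of the Junos services stanza rather than copied from a working box):

    set services nat pool CGN-POOL address 192.0.2.0/24
    set services nat pool CGN-POOL port automatic
    set services nat rule CGN-RULE match-direction input
    set services nat rule CGN-RULE term T1 from source-address 100.64.0.0/10
    set services nat rule CGN-RULE term T1 then translated source-pool CGN-POOL
    set services nat rule CGN-RULE term T1 then translated translation-type napt-44
    set services service-set CGN-SET nat-rules CGN-RULE
    set services service-set CGN-SET interface-service service-interface ms-2/0/0.0
    set interfaces ge-0/1/0 unit 0 family inet service input service-set CGN-SET
    set interfaces ge-0/1/0 unit 0 family inet service output service-set CGN-SET

The service set rides on the MS-MPC's ms- interface and gets applied in both directions on the subscriber-facing port.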

- Aaron



Rolf Hanßen
2017-10-26 21:51:10 UTC
Hello Aaron,

That's not a Cisco-only "feature".
You could also move from MX to new ASR boxes, because Juniper told you that
your old DPC cards will not work if you replace your RE-S-2000 with the
newest RE (RE-S-X6-64G + SCBE2). ;)

kind regards
Rolf
Post by Aaron Gould
The thing that caused me to evaluate replacing my ASR9k 15-node network was
when Cisco told me that if I replaced my RSP-4G routing engine with the newest
one, all my 1st-gen Trident line cards would stop working. :|
Misak Khachatryan
2017-10-27 08:50:42 UTC
Hi,

It is strange, because the RE doesn't do much with the line cards; maybe it
depends on what kind of SCB you have ...

Best regards,
Misak Khachatryan,
Network Administration and
Monitoring Department Manager,

GNC- ALFA CJSC
1 Khaghaghutyan str., Abovyan, 2201 Armenia
Tel: +374 60 46 99 70 (9670),
Mob.: +374 55 19 98 40
URL: www.rtarmenia.am
Post by Rolf Hanßen
Hello Aaron,
That's not a Cisco-only "feature".
You could also move from MX to new ASR boxes, because Juniper told you that
your old DPC cards will not work if you replace your RE-S-2000 with the
newest RE (RE-S-X6-64G + SCBE2). ;)
kind regards
Rolf
Post by Aaron Gould
The thing that caused me to evaluate replacing my ASR9k 15-node network [...]
Rolf Hanßen
2017-10-27 11:23:59 UTC
Hi,

RE-S-X6-64G requires SCBE2.
SCBE2 does not work with DPCs.
So you cannot upgrade to the newest RE with old line cards.

kind regards
Rolf
Post by Misak Khachatryan
Hi,
It is strange, because the RE doesn't do much with the line cards; maybe it
depends on what kind of SCB you have ...
Best regards,
Misak Khachatryan,
Mark Tinka
2017-10-31 05:58:37 UTC
Post by a***@netconsultings.com
If your network is carrying traffic of a single priority level, or if it just
can't get congested, then you'll be fine (well, you'll still have to bear the
stupid BGP implementation in Junos). [...]
We run BGP on Junos and have no complaints, fundamentally.

Mark.
a***@netconsultings.com
2017-10-31 13:30:11 UTC
Sent: Tuesday, October 31, 2017 5:59 AM
We run BGP on Junos and have no complaints, fundamentally.
Well, glad for you.

But I actually do mind the following:

1) BGP tables (e.g. bgp.l3vpn.0) are created only at the instant the PE needs to store received MP-BGP routes in them.
- This is very confusing when coming from a vendor where all tables are always used in "full-duplex" mode.
- These BGP tables are only used for routes received over MP-BGP, and I'm just not sure why local VPN routes are dumped into them as well (seems like a waste of resources and a source of confusion).

2) VRFs do not use the BGP tables to advertise routes to RRs/other PEs or to each other.
- That's why you need to apply MP-BGP export policies at each individual VRF as a vrf-export policy; or, if you want to do it at the global MP-BGP session level, you have to use the "vpn-apply-export" knob, which places the globally configured policy at the end of each VRF's export policy chain.
- And yes, you guessed it: "advertise-from-main-vpn-tables" does not do the trick, even though it is supposed to move all MP-BGP sessions to their respective common RIB-out - which resets the sessions; see point 3) below.
- And for some reason it's still not the same RIB-out as used by RRs, because on RRs/ASBRs you actually can use export policies directly on MP-BGP sessions.

3) A BGP session is reset each time the peer moves to a different update group (RIB-out) - to say nothing of the inefficiency whereby:

4) BGP creates multiple identical copies of the RIB-out, based purely on the configured peer groups.

All this suggests to me that it was somehow cobbled together over the years with no master plan. Yes, it routes, somehow, but because it's so complex, troubleshooting what's going on under the hood (e.g. why it takes 5 minutes to import a route from bgp.l3vpn.0 into a newly added VRF.inet.0) is a nightmare.
It does not seem "carrier grade" to me.
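To make point 2) concrete, here is roughly where the two export hooks live in Junos (a sketch with placeholder policy/group names, not a verified config):

    # per-VRF hook: each instance carries its own export policy
    set routing-instances CUST-A instance-type vrf
    set routing-instances CUST-A vrf-export CUST-A-EXPORT

    # global hook: an export policy on the MP-BGP group is ignored for VPN
    # routes unless vpn-apply-export is set, and even then it is evaluated
    # after each VRF's own vrf-export chain
    set protocols bgp group IBGP-RR family inet-vpn unicast
    set protocols bgp group IBGP-RR export GLOBAL-VPN-EXPORT
    set protocols bgp group IBGP-RR vpn-apply-export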

adam


Mark Tinka
2017-10-31 14:28:30 UTC
Post by a***@netconsultings.com
But I actually do mind the following:
1) BGP tables (e.g. bgp.l3vpn.0) are created only at the instant the PE needs to store received MP-BGP routes in them. [...]
2) VRFs do not use the BGP tables to advertise routes to RRs/other PEs or to each other. [...]
Assuming you're using this for Internet in a VRF, I wouldn't know. I've
done my best to stay away from this topology.

If you're talking about classic MPLS VPNs (l3vpns), we do a lot more
stuff in Global than in VRFs, for what it matters to us. But I do see your
concern.
Post by a***@netconsultings.com
3) A BGP session is reset each time the peer moves to a different update group (RIB-out). [...]
This has annoyed everyone for some time now.

As you know, there are inelegant workarounds, but if it bothers you this
much, perhaps start talking about an ER with your AM.
Post by a***@netconsultings.com
4) BGP creates multiple identical copies of the RIB-out, based purely on the configured peer groups.
Same as above.
Post by a***@netconsultings.com
All this suggests to me that it was somehow cobbled together over the years with no master plan. Yes, it routes, somehow, but because it's so complex, troubleshooting what's going on under the hood (e.g. why it takes 5 minutes to import a route from bgp.l3vpn.0 into a newly added VRF.inet.0) is a nightmare.
It does not seem "carrier grade" to me.
You're probably right, but we pick our battles. In the grand scheme of
things, after all is said and done, this isn't one of the battles that will
determine whether we go MX or ASR.

Mark.
a***@netconsultings.com
2017-11-01 09:08:11 UTC
Sent: Tuesday, October 31, 2017 2:29 PM
You're probably right, but we pick our battles. In the grand scheme of
things, after all is said and done, this isn't one of the battles that will
determine whether we go MX or ASR.
True that.
SW I can work around; it's the HW performance and flaws/compromises that concern me, as it's much harder to convince vendors to fix those, and sometimes they can't be fixed at all.

adam

Mark Tinka
2017-11-01 09:11:58 UTC
Post by a***@netconsultings.com
True that.
SW I can work around; it's the HW performance and flaws/compromises that concern me, as it's much harder to convince vendors to fix those, and sometimes they can't be fixed at all.
True, and partially why I've tried to stay away from off-the-shelf silicon.
It seems like it's getting better, and I hope the day will come when there
aren't any major restrictions that concern us enough not to invest in
such boxes for routing (switching is fine).

But you must have some really unique hardware-related issues, then.
Between the existing array of Trio-based MPCs/MICs, and now the
MX10003 with the 3rd-generation Trio chip, one would hope there is an
answer for you in there.

Mark.
a***@netconsultings.com
2017-11-01 15:17:23 UTC
Sent: Wednesday, November 01, 2017 9:12 AM
But you must have some really unique hardware-related issues, then.
Between the existing array of Trio-based MPCs/MICs, and now the
MX10003 with the 3rd-generation Trio chip, one would hope there is an
answer for you in there.
Well, it's as simple as having high-priority traffic protected while under DDoS on the MX platform (to get behaviour consistent with the PTX or the old T4k).
And yes, there is light at the end of the tunnel: since 17.2R1, all MX cards from the MPC2-NG through the MPC9 have a tuneable pre-classifier, so you can change the incorrect defaults.
https://www.juniper.net/documentation/en_US/junos/topics/concept/cos-ingress-oversubscription-at-pfe.html
Though I think you'd need to ask for this on the MX10003 MPCs.

adam

