Discussion:
[c-nsp] BGP DFZ convergence time - FIB programming
Robert Hass
2018-10-05 07:17:05 UTC
Hi
I'm looking to share experiences regarding the time needed to program a full DFZ
table (710K IPv4 prefixes) on NCS 5500 boxes.

Right now we're testing competitors (Jericho-based boxes) and the results are not
impressive - the time needed to program is around 2min 30sec up to 3min.

How fast is the NCS 5500 at handling FIB programming?

Rob
Łukasz Bromirski
2018-10-05 07:23:38 UTC
Robert,
Post by Robert Hass
Hi
I'm looking to share experiences regarding the time needed to program a full DFZ
table (710K IPv4 prefixes) on NCS 5500 boxes.
Right now we're testing competitors (Jericho-based boxes) and the results are not
impressive - the time needed to program is around 2min 30sec up to 3min.
How fast is the NCS 5500 at handling FIB programming?
Please take a look here:
https://xrdocs.io/cloud-scale-networking/tutorials/ncs5500-fib-programming-speed/
--
Łukasz Bromirski
CCIE R&S/SP #15929, CCDE #2012::17, PGP Key ID: 0xFD077F6A
James Bensley
2018-10-05 09:09:15 UTC
Post by Łukasz Bromirski
Robert,
Post by Robert Hass
Hi
I'm looking to share experiences regarding the time needed to program a full DFZ
table (710K IPv4 prefixes) on NCS 5500 boxes.
Right now we're testing competitors (Jericho-based boxes) and the results are not
impressive - the time needed to program is around 2min 30sec up to 3min.
How fast is the NCS 5500 at handling FIB programming?
https://xrdocs.io/cloud-scale-networking/tutorials/ncs5500-fib-programming-speed/
--
Łukasz Bromirski
CCIE R&S/SP #15929, CCDE #2012::17, PGP Key ID: 0xFD077F6A
There is an extra video, not linked in that article, comparing the
external-TCAM and non-external-TCAM versions:


Cheers,
James.
Ted Pelas Johansson
2018-10-05 07:23:40 UTC
Hi Rob,

Please have a look at Nicolas's blog post about the NCS 5500; it should answer your question:
https://xrdocs.io/cloud-scale-networking/tutorials/ncs5500-fib-programming-speed/

Best Regards
Ted

Sent while walking

On 5 Oct 2018, at 09:18, Robert Hass <***@gmail.com> wrote:

Hi
I'm looking to share experiences regarding the time needed to program a full DFZ
table (710K IPv4 prefixes) on NCS 5500 boxes.

Right now we're testing competitors (Jericho-based boxes) and the results are not
impressive - the time needed to program is around 2min 30sec up to 3min.

How fast is the NCS 5500 at handling FIB programming?

Rob
Mark Tinka
2018-10-05 07:25:35 UTC
Post by Robert Hass
Hi
I'm looking to share experiences regarding the time needed to program a full DFZ
table (710K IPv4 prefixes) on NCS 5500 boxes.
Right now we're testing competitors (Jericho-based boxes) and the results are not
impressive - the time needed to program is around 2min 30sec up to 3min.
How fast is the NCS 5500 at handling FIB programming?
I don't have any of these boxes (or chipsets) in my network, but out of
curiosity, how long does it take to load the same routes into the RIB,
depending on whether they are learned via iBGP (from a route reflector,
likely) or via eBGP (from an upstream provider, for example)?

Mark.
Phil Bedard
2018-10-11 12:46:08 UTC
Nicolas covered the RIB speed in the blog as well, and had varying results, but on average about 10s for today's full table. The bottleneck in the operation is usually the advertising router, not the local router populating the RIB. I don't think there was a test where the FIB took more than 30 seconds.
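For anyone wanting to reproduce that kind of measurement, a rough way to bracket it on an IOS XR box is to timestamp the route counters before and after the session comes up. A sketch from memory (the "<-" annotations are just comments, and the "dpa" command is NCS 5500 specific - double-check the exact commands on your release):

  show clock
  show bgp ipv4 unicast summary                  <- prefixes received per neighbour
  show route summary                             <- prefixes installed in the RIB
  show dpa resources iproute location 0/0/CPU0   <- prefixes programmed in the NCS 5500 hardware
  show clock

Repeat until the hardware counter stops moving; the delta between the two clock readings gives an upper bound on the FIB programming time.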

Thanks,
Phil
Post by Mark Tinka
Post by Robert Hass
Hi
I'm looking to share experiences regarding the time needed to program a full DFZ
table (710K IPv4 prefixes) on NCS 5500 boxes.
Right now we're testing competitors (Jericho-based boxes) and the results are not
impressive - the time needed to program is around 2min 30sec up to 3min.
How fast is the NCS 5500 at handling FIB programming?
I don't have any of these boxes (or chipsets) in my network, but out of
curiosity, how long does it take to load the same routes into the RIB,
depending on whether they are learned via iBGP (from a route reflector,
likely) or via eBGP (from an upstream provider, for example)?

Mark.
James Bensley
2018-10-11 13:56:52 UTC
Post by Mark Tinka
Post by Robert Hass
Hi
I'm looking to share experiences regarding the time needed to program a full DFZ
table (710K IPv4 prefixes) on NCS 5500 boxes.
Right now we're testing competitors (Jericho-based boxes) and the results are not
impressive - the time needed to program is around 2min 30sec up to 3min.
How fast is the NCS 5500 at handling FIB programming?
I don't have any of these boxes (or chipsets) in my network, but out of
curiosity, how long does it take to load the same routes into the RIB,
depending on whether they are learned via iBGP (from a route reflector,
likely) or via eBGP (from an upstream provider, for example)?
Mark.
Hi Mark,

What makes you think there would be a difference in time to load eBGP learned routes vs. iBGP learned routes? Something from personal experience?

Am I being naive here? I'd expect them to be the same. An UPDATE from an eBGP or iBGP peer could contain the same NLRI with all of the same attributes, so I would expect them to pass through the same pipeline of BGP parsing and processing code?

Cheers,
James.
Robert Raszuk
2018-10-11 14:30:28 UTC
Post by James Bensley
Hi Mark,
What makes you think there would be a difference in time to load eBGP
learned routes vs. iBGP learned routes? Something from personal experience?
James,

I think the difference Mark may have in mind is that iBGP routes, say from an
RR, are advertised from the RR's control plane. Many RRs today are just x86
control-plane boxes with no forwarding.

On the other hand, a number of implementations must make sure that routes are
installed in the local data plane before they are advertised over eBGP.

Of course, none of this really matters if those routes are already installed
when you trigger the advertisements towards the UUT.

There are also a few more little "delays" for eBGP vs. iBGP depending on your
BGP code base - regarding policy processing or origin validation :).

Best,
R.
Mark Tinka
2018-10-11 15:25:22 UTC
Post by Robert Raszuk
James,
I think the difference Mark may have in mind is that iBGP routes, say from
an RR, are advertised from the RR's control plane. Many RRs today are just
x86 control-plane boxes with no forwarding.
On the other hand, a number of implementations must make sure that routes
are installed in the local data plane before they are advertised over eBGP.
Of course, none of this really matters if those routes are already installed
when you trigger the advertisements towards the UUT.
There are also a few more little "delays" for eBGP vs. iBGP depending on
your BGP code base - regarding policy processing or origin validation :).
Yes, all the above :-).

Mark.
James Bensley
2018-10-11 20:37:59 UTC
Post by Robert Raszuk
I think the difference Mark may have in mind is that iBGP routes, say from an RR, are advertised from the RR's control plane. Many RRs today are just x86 control-plane boxes with no forwarding.
On the other hand, a number of implementations must make sure that routes are installed in the local data plane before they are advertised over eBGP.
Of course, none of this really matters if those routes are already installed when you trigger the advertisements towards the UUT.
Hi Rob, Mark,

So that's not really an apples-to-apples comparison then. Are you
interested in seeing the difference between a vRR, a physical RR and a
physical PE?
Post by Robert Raszuk
There are also a few more little "delays" for eBGP vs. iBGP depending on your BGP code base - regarding policy processing or origin validation :).
There is nothing to stop me creating a horribly complex iBGP policy, so
I'm not sure policy length is a factor. I was referring strictly to the
code performance, e.g. router model A, with firmware version B, receives
the exact same route set from an eBGP and an iBGP neighbor (and those
two neighbors are the exact same spec). Is there still scope for a
difference in convergence time here?

Cheers,
James.
Robert Raszuk
2018-10-11 21:47:27 UTC
Post by James Bensley
There is nothing to stop me creating a horribly complex iBGP policy
A decent BGP implementation should not allow iBGP-learned routes to be
subject to any BGP policy, as doing so will easily result in inconsistent
routing. So on this ground there can be BGP code-path differences in how
the routes are processed.

But if you have an accept-all policy for eBGP without any inbound
modifications (match/set), then iBGP vs. eBGP should indeed be pretty
similar. The other processing, i.e. best-path selection, RIB insertion and
then FIB insertion, should be quite similar timing-wise regardless of the
type of route.

Thx,
R.
Post by James Bensley
Post by Robert Raszuk
I think the difference Mark may have in mind is that iBGP routes, say from
an RR, are advertised from the RR's control plane. Many RRs today are just
x86 control-plane boxes with no forwarding.
On the other hand, a number of implementations must make sure that routes
are installed in the local data plane before they are advertised over eBGP.
Of course, none of this really matters if those routes are already installed
when you trigger the advertisements towards the UUT.
Hi Rob, Mark,
So that's not really an apples-to-apples comparison then. Are you
interested in seeing the difference between a vRR, a physical RR and a
physical PE?
Post by Robert Raszuk
There are also a few more little "delays" for eBGP vs. iBGP depending on
your BGP code base - regarding policy processing or origin validation :).
There is nothing to stop me creating a horribly complex iBGP policy, so
I'm not sure policy length is a factor. I was referring strictly to the
code performance, e.g. router model A, with firmware version B, receives
the exact same route set from an eBGP and an iBGP neighbor (and those
two neighbors are the exact same spec). Is there still scope for a
difference in convergence time here?
Cheers,
James.
heasley
2018-10-11 22:13:12 UTC
Post by Robert Raszuk
A decent BGP implementation should not allow iBGP-learned routes to be
subject to any BGP policy, as doing so will easily result in inconsistent
routing.
That is not entirely true; yes, one must be careful when applying policy
to internal sessions, but that does not mean that there are no legitimate
applications, and thus no implementation should prevent it.
Robert Raszuk
2018-10-12 00:34:52 UTC
While we are diverging a bit from the original topic: while indeed, under
very careful application, there could be some use cases for outbound BGP
policies even for iBGP, I have never seen one applied inbound - which was
the point of my comment.

So, for educational purposes, could you describe some real, valid use cases
for applying BGP policies to routes *received* over iBGP?

Thx,
Robert.
Post by heasley
Post by Robert Raszuk
A decent BGP implementation should not allow iBGP-learned routes to be
subject to any BGP policy, as doing so will easily result in inconsistent
routing.
That is not entirely true; yes, one must be careful when applying policy
to internal sessions, but that does not mean that there are no legitimate
applications, and thus no implementation should prevent it.
Tim Warnock
2018-10-12 01:14:03 UTC
Post by Robert Raszuk
So, for educational purposes, could you describe some real, valid use cases
for applying BGP policies to routes *received* over iBGP?
Thx,
Robert.
Setting local preference?

Rewriting next hop?
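For example, a minimal inbound iBGP policy of that kind in classic IOS syntax (the ASN, community, peer address and names below are made up for illustration) might look like:

  ip community-list standard CL-PREFER permit 64500:100
  !
  route-map IBGP-IN permit 10
   match community CL-PREFER
   set local-preference 200
  route-map IBGP-IN permit 20
  !
  router bgp 64500
   neighbor 192.0.2.10 remote-as 64500
   neighbor 192.0.2.10 route-map IBGP-IN in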
James Bensley
2018-10-12 06:16:34 UTC
Post by Tim Warnock
Post by Robert Raszuk
So, for educational purposes, could you describe some real, valid use cases
for applying BGP policies to routes *received* over iBGP?
Thx,
Robert.
Setting local preference?
Rewriting next hop?
Yeah, stuff like remotely triggered black hole routing is often applied as an inbound iBGP policy.

Cheers,
James.
a***@netconsultings.com
2018-10-12 12:15:37 UTC
Post by James Bensley
Post by Tim Warnock
Post by Robert Raszuk
So, for educational purposes, could you describe some real, valid use cases
for applying BGP policies to routes *received* over iBGP?
Thx,
Robert.
Setting local preference?
Rewriting next hop?
Yeah, stuff like remotely triggered black hole routing is often applied as
an inbound iBGP policy.
In order to avoid using an ingress policy on the iBGP sessions towards the
RRs, I'm setting a dummy next-hop on export from the trigger VRF, but yes, I
had to add the dummy next-hop onto the RRs so that they have a valid next-hop
and can relay the route further.
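For illustration only - the export-side rewrite is platform dependent, but the bit that gives the RRs (and anything else that has to resolve the route) a valid next-hop is simply a discard route for the dummy next-hop address, using 192.0.2.1 here as a made-up example:

  ! classic IOS / IOS XE
  ip route 192.0.2.1 255.255.255.255 Null0
  !
  ! IOS XR
  router static
   address-family ipv4 unicast
    192.0.2.1/32 Null0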

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


Mark Tinka
2018-10-13 16:36:12 UTC
Post by a***@netconsultings.com
In order to avoid using an ingress policy on the iBGP sessions towards the
RRs, I'm setting a dummy next-hop on export from the trigger VRF, but yes, I
had to add the dummy next-hop onto the RRs so that they have a valid next-hop
and can relay the route further.
For us, customer-triggered RTBH is provided as standard for all eBGP
sessions with customers. Once they send us the right community with
their own routes, we just pass that community on to the RR's via iBGP.
The RR will relay those routes to all other devices in the network, and
as long as those devices see that community (and are permitted to act on
said community), traffic to the routes that carry the community is
dropped locally on those devices.
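The customer-facing half of such a setup usually looks something like the sketch below (IOS-style, with a made-up ASN, community and peer address; prefix-length and ownership checks are omitted for brevity):

  ip community-list standard CL-CUST-BLACKHOLE permit 64500:666
  !
  route-map CUST-IN permit 10
   match community CL-CUST-BLACKHOLE
   set local-preference 200
  route-map CUST-IN permit 20
   ! normal customer ingress policy continues here
  !
  router bgp 64500
   neighbor 203.0.113.2 remote-as 64501
   neighbor 203.0.113.2 route-map CUST-IN in

The community then rides with the route to the RRs, and each device that is allowed to act on it drops the traffic locally.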

For manually-triggered RTBH (i.e., the NOC have to do it because the
customer does not know how to or does not want to do it themselves), we
have a dedicated router in the network that can be used as the launchpad
for RTBH signals, managed by the NOC.

We don't perform any ingress iBGP policy for RTBH anywhere in the network.

Mark.
Mark Tinka
2018-10-13 16:38:15 UTC
Post by Mark Tinka
We don't perform any ingress iBGP policy for RTBH anywhere in the network.
Spoke too soon... with peering routers being the exception, as we
tightly control which routes are made available to the peering routers;
we don't hold a full table there.

Mark.
a***@netconsultings.com
2018-10-15 09:32:02 UTC
Post by Mark Tinka
We don't perform any ingress iBGP policy for RTBH anywhere in the network.
Spoke too soon... with peering routers being the exception, as we tightly
control which routes are made available to the peering routers; we don't
hold a full table there.
Ha, same here - twofold, actually:

1) We started using FlowSpec for dealing with DDoS once it's inside the network - much better granularity, no need to throw the customer overboard instantly. RTBH and scrubbing are used to protect the peering links - but that's not related to the iBGP ingress-policy discussion.

2) We're actually using ingress/egress iBGP filtering all over the place due to the multi-planar RR infrastructure I created - so Robert, that's another use case for you :)

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

Tim Warnock
2018-10-13 20:41:18 UTC
Post by Mark Tinka
For us, customer-triggered RTBH is provided as standard for all eBGP sessions
with customers. Once they send us the right community with their own
routes, we just pass that community on to the RRs via iBGP. The RR will relay
those routes to all other devices in the network, and as long as those devices
see that community (and are permitted to act on said community), traffic to
the routes that carry the community is dropped locally on those devices.
Sounds like standard practice.
Post by Mark Tinka
We don't perform any ingress iBGP policy for RTBH anywhere in the network.
We match incoming routes tagged with RTBH from the RR and rewrite the next-hop
to the appropriate "/dev/null" address per address family, which sounds a lot
like what you guys do.

I would consider this to be "policy". Why would you not?

-Tim.
Robert Raszuk
2018-10-13 21:01:28 UTC
Post by Tim Warnock
Sounds like standard practice.
This way of (D)DoS mitigation results in cutting the poor target
completely out of the network ... so the attacker has succeeded very well,
with your assistance, as legitimate users can no longer reach the guy. Is it
his fault that he got attacked?

Do you also do the same if this is transit traffic?

When do you remove such a blackhole? Do you look at the ingress counters
towards the target?

Did you ever, instead of the above, consider automation to apply at least
src/dst + port filters with FlowSpec and just rate-limit the malicious
distributed flows (RFC 5575)?
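(For reference, the IOS XR BGP FlowSpec model is roughly the one sketched below - this is from memory, so treat the class-map/policy-map keywords, names and values as illustrative and check the configuration guide for your platform and release:)

  class-map type traffic match-all CM-NTP-REFLECTION
   match source-port 123
   match destination-address ipv4 203.0.113.10 255.255.255.255
   end-class-map
  !
  policy-map type pmap PM-DDOS-MITIGATION
   class type traffic CM-NTP-REFLECTION
    police rate 10 mbps
   !
   end-policy-map
  !
  flowspec
   address-family ipv4
    service-policy type pmap PM-DDOS-MITIGATION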

Thx,
R.
Gert Doering
2018-10-13 21:17:21 UTC
Hi,
Post by Robert Raszuk
Post by Tim Warnock
Sounds like standard practice.
This way of (D)DoS mitigation results in cutting the poor target
completely out of the network ... so the attacker has succeeded very well,
with your assistance, as legitimate users can no longer reach the guy. Is it
his fault that he got attacked?
No, but sometimes there is no other remedy. Like, a customer has a
larger network (say, IPv4 /23), and a single IP is attacked, filling
his pipe. If you drop that single address, the rest of the network
can operate normally.

Would it be better to stop the attack without taking the target host
offline? Of course!


[..]
Post by Robert Raszuk
Did you ever, instead of the above, consider automation to apply at least
src/dst + port filters with FlowSpec and just rate-limit the malicious
distributed flows (RFC 5575)?
Indeed, this would be superior, but not all our hardware can do this,
and (as far as I'm aware) none of our upstream providers support this - so
if we cannot stand the volume anymore (upwards of ~50 Gbit/s), all we
can do is signal upstream "please do not deliver traffic to that target
IP"...

(What we do is rate-limit all the cheap crap, like NTP, fragments, and DNS
responses to non-whitelisted well-known recursor addresses, to reasonable
limits - so as long as our ingress pipes are not full, we do not blackhole
destination addresses.)
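A minimal IOS-style sketch of that kind of ingress policer (the ACL contents, names and rate are made up; the real thing obviously needs the recursor whitelist and sensible per-protocol limits):

  ip access-list extended ACL-CHEAP-CRAP
   permit udp any eq 123 any
   permit udp any eq 53 any
   permit ip any any fragments
  !
  class-map match-any CM-CHEAP-CRAP
   match access-group name ACL-CHEAP-CRAP
  !
  policy-map PM-TRANSIT-IN
   class CM-CHEAP-CRAP
    police 50000000 conform-action transmit exceed-action drop
  !
  interface TenGigabitEthernet0/1/0
   service-policy input PM-TRANSIT-IN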

gert
--
"If was one thing all people took for granted, was conviction that if you
feed honest figures into a computer, honest figures come out. Never doubted
it myself till I met a computer with a sense of humor."
Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany ***@greenie.muc.de
Nick Hilliard
2018-10-13 21:22:51 UTC
Post by Robert Raszuk
This way of (D)DoS mitigation results in cutting the poor target
completely out of the network ... so the attacker has succeeded very well,
with your assistance, as legitimate users can no longer reach the
guy.
Service providers usually care more about the continuity of their
network than the uptime of a single IP address. If a network is hit by
a DDoS which is 10x the ingress transit + peering capacity, most
sensible people are going to blackhole the affected IP address and also
signal to upstreams that it should be blackholed. Unless you set out to
design a network with enough capacity to withstand giant DDoS events,
RTBH with upstream blackholing will remain a useful tool in the box.
Post by Robert Raszuk
Is it his fault that he got attacked?
Saturated network links don't have an opinion on blame.

But to bring things back to the topic, yes there are several
well-established cases where policy is applied to ingress iBGP sessions.

Nick
Mark Tinka
2018-10-14 09:57:45 UTC
Post by Robert Raszuk
This way of (D)DoS mitigation results in cutting the poor target
completely out of the network ... so the attacker has succeeded very well,
with your assistance, as legitimate users can no longer reach the guy.
Is it his fault that he got attacked?
Do you also do the same if this is transit traffic?
When do you remove such a blackhole? Do you look at the ingress counters
towards the target?
Did you ever, instead of the above, consider automation to apply at least
src/dst + port filters with FlowSpec and just rate-limit the malicious
distributed flows (RFC 5575)?
We provide 2 options - the poor man's one (which completes the attack)
and the paid-for one, which cleans the attack.

Mark.
Nick Hilliard
2018-10-13 21:04:23 UTC
Post by Tim Warnock
I would consider this to be "policy". Why would you not?
On IOS, you can hack around this by setting the NHIP on the announcing
router to an IP address which is statically routed to Null0 on the RR
client, i.e. no explicit policy required. On XR, you need to apply a
route-policy to declare "set next-hop discard", so explicit policy is
required there.
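Roughly, with a made-up community and discard address:

  ! IOS: the trigger router sets a well-known next-hop...
  route-map RTBH-TRIGGER permit 10
   set ip next-hop 192.0.2.1
   set community 64500:666
  !
  ! ...and every RR client simply discards that next-hop statically:
  ip route 192.0.2.1 255.255.255.255 Null0
  !
  ! IOS XR: an explicit inbound policy on the iBGP session is needed:
  route-policy RTBH-IN
    if community matches-any (64500:666) then
      set next-hop discard
    endif
    pass
  end-policy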

Nick
Mark Tinka
2018-10-14 09:56:24 UTC
Post by Tim Warnock
We match incoming routes tagged with RTBH from the RR and rewrite the next-hop to the appropriate "/dev/null" address per address family, which sounds a lot like what you guys do.
I would consider this to be "policy". Why would you not?
We do the above on peering routers.

On edge routers, it's not necessary, as the application of the RTBH
communities is a local action (which is where it would start from,
anyway, if you look at the overall network).

Mark.
a***@netconsultings.com
2018-10-12 12:12:07 UTC
Post by Tim Warnock
Post by Robert Raszuk
So, for educational purposes, could you describe some real, valid use cases
for applying BGP policies to routes *received* over iBGP?
Thx,
Robert.
Setting local preference?
Rewriting next hop?
I think Robert was interested in use cases, not in which attributes can be
set inbound on an iBGP session.

This question actually got me thinking.

There are several possible attach points that can be used to manipulate an
iBGP route before it gets installed into the RIB (iBGP session / VRF
import / BGP->RIB).
Now, would you say all of these are sort of in the inbound direction from the
iBGP perspective? After all, the iBGP route would be subject to all of these
before it gets installed into the RIB.

With regard to the use cases,
I think the one common trait of all the use cases relying on any of the
above-mentioned attach points is the need to manipulate how the route is
treated locally on the receiving BGP speaker - which, driven by the use case,
would be contrary to how the same route is treated on all other speakers in
the AS (think: one size does not fit all).
Thinking about it, this need for an ingress policy on iBGP sessions is rooted
in the fact that (by default, and without hacks) one can't "process" a BGP
route for the same prefix multiple times, where each copy would be intended
only for a specific receiver or a set of receivers.
The specifics of how that is accomplished are irrelevant, but what remains is
that a policy in the "ingress" direction is indeed required in these use cases.


// Yes, I know I should not be using the term "BGP route" in the context of
// the BGP process, and should be using the term "prefix" or "path" instead.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

Mark Tinka
2018-10-13 16:29:16 UTC
Post by a***@netconsultings.com
There are several possible attach points that can be used to manipulate an
iBGP route before it gets installed into the RIB (iBGP session / VRF
import / BGP->RIB).
Now, would you say all of these are sort of in the inbound direction from the
iBGP perspective? After all, the iBGP route would be subject to all of these
before it gets installed into the RIB.
We generally apply quite a bit of policy at the eBGP edge, i.e., with
customers, peers or upstreams. Once those policies are applied, we just
pass the routes on as they are toward the RRs via iBGP, not touching them again.

The policy we would apply on the edge device itself that is not part of
an eBGP session would be for locally-generated routes, e.g., routes that
define all point-to-point addresses used for customers attached to that
device, peers attached to that device where we are providing the
point-to-point IP addresses, etc.

Then on the RR's, we can determine whether we want those routes to be
seen by the entire network, or some section of the network, based on the
BGP communities that the route arrives with at the RR. This is
influenced by the type of service the route is carrying, be it a
customer or another internal function.

Mark.
Mark Tinka
2018-10-13 16:23:27 UTC
Post by Robert Raszuk
While we are diverging a bit from the original topic: while indeed, under
very careful application, there could be some use cases for outbound BGP
policies even for iBGP, I have never seen one applied inbound - which was
the point of my comment.
So, for educational purposes, could you describe some real, valid use cases
for applying BGP policies to routes *received* over iBGP?
On our edge routers, it's for telling them how to handle BGP communities
that define routes whose traffic needs to be scrubbed (DoS mitigation).

On peering routers, RTBH.

But yes, that's about it.

Mark.
Mark Tinka
2018-10-13 16:18:52 UTC
Post by Robert Raszuk
A decent BGP implementation should not allow iBGP-learned routes to be
subject to any BGP policy, as doing so will easily result in
inconsistent routing. So on this ground there can be BGP code-path
differences in how the routes are processed.
We actually do a lot of policy on iBGP sessions, both on the RRs and
their clients.

95% of the policies are in the outbound direction, with the rest being
inbound.

In our case, we can support both consistent and inconsistent internal
routing at the same time. It can depend on device functions, device
location or device resources.

Mark.
Mark Tinka
2018-10-11 15:24:38 UTC
Post by James Bensley
Hi Mark,
What makes you think there would be a difference in time to load eBGP learned routes vs. iBGP learned routes? Something from personal experience?
Am I being naive here? I'd expect them to be the same. An UPDATE from an eBGP or iBGP peer could contain the same NLRI with all of the same attributes, so I would expect them to pass through the same pipeline of BGP parsing and processing code?
Depends on the capabilities of the other peer.

For us in iBGP, the CSR1000v is mega quick - we can load a full IPv4
table in less than a minute. I've not seen the same performance from
eBGP sessions, even when latency and bandwidth are not an issue.

Mark.