
- In this lesson, we're gonna look at integration. Integration, of course, is not a new idea. It's not new or novel with respect to the reactive programming paradigm, and if you've ever used Spring before, you know there's robust support for all sorts of messaging- and integration-based scenarios. So, for example, in the Spring Framework you have the JmsTemplate, you have the AmqpTemplate in Spring AMQP, and in Spring for Apache Kafka you have the KafkaTemplate. I mean, there's good support throughout the Spring ecosystem for messaging, and part of that, of course, is the template inside of the various modules that support different messaging technologies, and then the counterpart to that, on the consumer side, is the thing called a message listener container. These form foundational pieces, but of course at some point you wanna think about your code in terms of higher-order flows, and for that we have Spring Integration. Spring Integration is a framework for building event-driven architectures. It has, in turn, at its heart this concept of a message channel. Messages, the envelope objects that contain a payload and headers that describe that payload, flow through these different channels, and you string together these different channels into a sequence, into a chain, and the result is an integration flow. Message in, message out; message in, message out; message in, message out. You can have as many different components as you want, and these components, you know, they can do things like splitting and routing and so on. These components are written in terms of the message that comes in and the message that goes out. They're pretty much stateless; they hold very little state, usually, and what state there is is incidental, you know, it can live outside the component, but really there's no state in the component itself, usually.
Now, if this all sounds like it would lend itself very nicely to reactive programming, right, where we have asynchronous sequences of data coming in at a potentially unbounded quantity over a potentially unbounded amount of time, well, yes, it does exactly. So that's what we're going to talk about, essentially. We're going to zoom in, in this lesson, on integration with reactive programming. We're going to look first at Spring Integration and then at Spring Cloud Stream, so let's go build a new service. We're going to call this the consumer, and we'll use the reactive web support, the reactive Spring Cloud Stream support, Spring Integration, and Kafka. So I've got reactive Spring Cloud Stream, I've got integration, I've got Kafka, and I'm going to use Lombok as well. And that's it, that's our consumer. We'll start off with that and open this up in our IDE. So, in the consumer, first of all, let's look at Spring Integration.
Spring Integration, as I say, is a messaging framework; it has at its heart this concept of a messaging flow. Let's suppose I have a flux that generates new dates. I'm going to emit a date each time into a sink, and that gives me a flux of objects; I'm going to turn that into a flux of dates here, like so. And I wanna turn that flux of dates into a delayed stream, so I'm gonna delay every published or emitted item by one second. So I'll say dates. And now, I'm going to create an integration flow. I'll say IntegrationFlows.from, and here, normally, I would get the messages from a message channel or even a supplier, but now I can get the data from a publisher. And think about it: anything that comes through a publisher can be used here, but of course, the kind of publisher that we're expecting is a publisher of message of question mark, a Publisher<Message<?>>. So I need to adapt this code just a bit. I'm going to say dates.map, and wrap each payload, the date, in a message using the message builder, so we build a message out of it. Then we can handle it, using one of the various operators that we can use here to process the data. So this is how you build a normal integration flow. You can see here, I've got the payload printed out: the date is payload.toInstant().toString(), and of course there are headers that describe the payload, so I can iterate over all of those as well here, printing k equals v. Alright, so there are my headers and the payload itself. And if we run this, we should see that, for however long this program is running, there'll be a new message emitted into the integration flow. So there you go.
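
Put together, a minimal sketch of that flow might look like the following; the configuration class and the names dateFlow and DateFlowConfiguration are illustrative, not from the video.

    import java.time.Duration;
    import java.util.Date;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.handler.GenericHandler;
    import org.springframework.messaging.Message;
    import org.springframework.messaging.support.MessageBuilder;
    import reactor.core.publisher.Flux;

    @Configuration
    class DateFlowConfiguration {

        @Bean
        IntegrationFlow dateFlow() {

            // a flux that emits a new Date every second, each wrapped in a Message<Date>
            Flux<Message<Date>> dates = Flux
                .<Date>generate(sink -> sink.next(new Date()))
                .delayElements(Duration.ofSeconds(1))
                .map(date -> MessageBuilder.withPayload(date).build());

            // start the flow from the publisher, then handle each message
            return IntegrationFlows
                .from(dates)
                .handle((GenericHandler<Date>) (payload, headers) -> {
                    System.out.println(payload.toInstant().toString());
                    headers.forEach((k, v) -> System.out.println(k + "=" + v));
                    return null; // returning null ends the flow here
                })
                .get();
        }
    }
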
Now, this is very convenient, because now you can use all the operators that Spring Integration provides. You can do things like splitting and routing and transformation of the messages that flow through the system, and aggregation, and transformers, and all these different things you wanna do. There are operators here that you can use to chain together this processing pipeline, and the result is this very simple processing code written in terms of message in, message out: a pipes-and-filters architecture writ across the messaging system, across your enterprise architecture. Now, I used a publisher here that I just generated on the fly, but of course there's no reason you couldn't call a web client, for example. The WebClient could produce a publisher that gives you new values being produced from a web server. Maybe you have a server-sent events stream that you're consuming and you wanna process each message using Spring Integration; the publisher that comes back, after a bit of mapping, you could pass into this integration flow. You could use enterprise integration patterns to process this data. This gives you a very, very powerful DSL that goes above and beyond what you get with a regular flux. Of course, if you wanna terminate the flow, you can do so by writing the data out to another channel. Right, I could have another message channel here, and that channel itself could be connected to some other system: output, return MessageChannels.direct().get(). Alright, so there's my message channel. I'm going to just tell the flow to send the message out on the output channel, and voila. I can have another flow that listens for the messages coming out of this channel on the other side and then processes them.
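
Here's a sketch of that termination, assumed to extend the configuration sketched earlier (with the Flux<Message<Date>> extracted into a hypothetical dates() method); the output channel name and the second listening flow are illustrative.

    import org.springframework.integration.dsl.MessageChannels;
    import org.springframework.messaging.MessageChannel;
    import org.springframework.messaging.MessageHandler;

    @Bean
    MessageChannel output() {
        return MessageChannels.direct().get();
    }

    @Bean
    IntegrationFlow routeToOutput() {
        return IntegrationFlows
            .from(dates())       // a publisher like the Flux<Message<Date>> from the earlier sketch
            .channel(output())   // terminate by sending each message to the "output" channel
            .get();
    }

    @Bean
    IntegrationFlow listenOnOutput() {
        // a second flow listens on the other side of the channel
        return IntegrationFlows
            .from(output())
            .handle((MessageHandler) message -> System.out.println(message.getPayload()))
            .get();
    }
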
Instead of sending it out into a channel, I can also handle it using an adapter. Right, I can actually take the data and send it to an outbound adapter, using something like AMQP or file. You know, I don't have any outbound adapters here, but if I look at the pom.xml: spring-integration-amqp, -feed, -file, -http, the Java DSL, -jmx, -kafka, et cetera. These are just the ones I have on my local machine; obviously, there are a good deal more above and beyond these, so you can use any of those. Each gives you a dependency that you can use to process data with Kafka, or to use the file system, for example. Once that's on the class path, you can then configure, what did I use, I used a file, a Files outbound adapter with a new File, right, or whatever this could be. Like that, et cetera. So that outbound adapter could write to /home/whatever/foo/bar, okay? So you can do all sorts of interesting things here.
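
A sketch of such an outbound adapter, assuming spring-integration-file is on the class path; the directory and the reuse of the hypothetical dates() publisher are illustrative.

    import java.io.File;
    import org.springframework.integration.file.dsl.Files;

    @Bean
    IntegrationFlow writeToFileSystem() {
        return IntegrationFlows
            .from(dates())  // the publisher from the earlier sketch
            .handle(Files.outboundAdapter(new File("/home/whatever/foo/bar")))
            .get();
    }
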
And the data can originate as the result of a Reactive Streams publisher, and you can get Reactive Streams publishers from a lot of different places. You can even get a Reactive Streams publisher from this integration flow, right: I can actually say toReactivePublisher and then use that as the input to something else. So you can do all sorts of very interesting things with Spring Integration in and of itself, but that's it. Spring Integration, at the end of the day, is geared towards integrating disparate services and systems and data, and as a result, it's going to have lots of connectivity for older systems.
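
A quick sketch of toReactivePublisher, again reusing the hypothetical dates() publisher just to show the API.

    import org.reactivestreams.Publisher;

    @Bean
    Publisher<Message<Date>> datesPublisher() {
        // the flow itself becomes a Reactive Streams publisher you can feed into something else
        return IntegrationFlows
            .from(dates())  // the publisher from the earlier sketch
            .toReactivePublisher();
    }
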
++++++++++++++++++++++++++++++++++++++++++++++++++++++

If we say, you know, it's the modern age and we're moving forward with great pace, and I can take for granted that in this day and age I'm not gonna use an FTP server to connect my messaging-based systems, then we can move up the abstraction stack a little bit. We can use Spring Cloud Stream. And Spring Cloud Stream makes it dead simple to connect microservices in terms of messaging. So that's what we're gonna do here. We're gonna build a Spring Cloud Stream-based consumer and then publish messages into that consumer from a producer. Here, Spring Cloud Stream has at its heart the concept of a message channel and a binding, and these bindings are powered by binders. And we have on the class path here, ignoring this file stuff, let's get rid of that, we have spring-cloud-stream-binder-kafka. And so, I am going to create a binding interface that I'll call ConsumerChannels, and here I need to describe some input channels, right? I say MessageChannel input(), and I give it a name, and I can give it whatever name I want: String INPUT = "input", like so. Voila. Or it can be, you know, my logical perspective on some other downstream system. So, orders, for example, and I can call it whatever I want then, right? Any channel name you want. In this case it can be a SubscribableChannel as well, which is something you can hang a listener off of. And then I need to tell Spring Integration, or Spring Cloud Stream rather, about that binding: I can say, here is ConsumerChannels.class, and I can use that binding to process the incoming data.
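
A sketch of such a binding interface, using the annotation-based Spring Cloud Stream API this video uses; the names ConsumerChannels and orders are illustrative.

    import org.springframework.cloud.stream.annotation.Input;
    import org.springframework.messaging.SubscribableChannel;

    public interface ConsumerChannels {

        String ORDERS = "orders";

        @Input(ORDERS)
        SubscribableChannel orders();
    }
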
Then, I can create a stream listener, right? So I can say @StreamListener, public void process, and I can tell the stream listener that the data it should listen to is coming in on the orders channel, and I want the data to be delivered to me as a publisher of strings, for example, or a publisher of dates, or whatever. So, incoming strings; let's assume that the producer is sending data whose payloads are of type String. So now I can process that data: I can say incomingStrings.map, x to uppercase, and then I can process it. I can say subscribe, System.out.println, and write the data out. Now, normally in Spring Cloud Stream, you'd say something like @StreamListener(ConsumerChannels.ORDERS), and then you'd have a single item, like a String item, as the parameter, and the process method would be invoked for every single new item that gets emitted off of the incoming channel, off of the incoming Kafka topic or RabbitMQ. But in this case we're using a stream, a reactive publisher, so this method gets called once, when the application starts up, and it just continues to process data as it arrives off of this incoming publisher. That means that we can do some interesting things. We can do windowing, for example. I can say I wanna window the data over 10 seconds or whatever, or I can say, accumulate 10 seconds' worth of data, and then I can map over it; I can collect it into a map. I can do all sorts of interesting things with these operators here. I can do sort of lightweight stream processing, if you will. Alright, so that's my consumer. Let's see what that looks like. This annotation has to be on a configuration class, a class that has @Configuration on it, and of course @SpringBootApplication, by transitive annotation, is also a configuration class.
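
Putting the consumer together, a sketch might look like this, assuming the spring-cloud-stream-reactive module is on the class path.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.Input;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import reactor.core.publisher.Flux;

    @SpringBootApplication
    @EnableBinding(ConsumerChannels.class)
    public class ConsumerApplication {

        public static void main(String[] args) {
            SpringApplication.run(ConsumerApplication.class, args);
        }

        @StreamListener
        public void process(@Input(ConsumerChannels.ORDERS) Flux<String> incomingStrings) {
            incomingStrings
                .map(String::toUpperCase) // or window/buffer here, e.g. buffer(Duration.ofSeconds(10))
                .subscribe(System.out::println);
        }
    }
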
With that in place, we can now build a producer. Actually, I created this ConsumerChannels interface to demonstrate the concept, but of course you can have multiple of these, too, by the way. You can have multiple channels: you can have customers, or whatever; you can change the name to be any sort of string. But it turns out that subscribing to data from one well-known input is such a common thing that there's actually a prebuilt interface called Sink that we can use. And that comes with Spring Cloud Stream, so we're actually going to use that here, and we'll just use Sink.INPUT. Now, this channel definition, this configuration so far, is largely for our benefit. The Java code has nothing to do with how Spring Cloud Stream connects this messaging flow to the messaging system, so we can go to the property file here and say spring.cloud.stream.bindings.input; now, input is the name of the channel, and then we can say .destination= and, you know, we can call this greetings, the greetings topic in Apache Kafka. We can also specify that we want to be part of a group, so we can say group=greetings-group, for example. And, well, I guess that's it, that's all we need to say for now. That'll actually connect this channel name in the Java code to the topic represented by this name here, and of course, if we wanted to specify how to find the server, my Kafka host, we could do that as well, but by default it'll connect to localhost, and I happen to have Apache Kafka running in the background on my local machine.
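
All together, a sketch of the consumer's application.properties, assuming the prebuilt Sink binding (channel name input) and the topic and group names used here.

    # input is the channel name; the destination is the Kafka topic.
    spring.cloud.stream.bindings.input.destination=greetings
    spring.cloud.stream.bindings.input.group=greetings-group
    # optional: point at a non-local broker
    # spring.cloud.stream.kafka.binder.brokers=my-kafka-host:9092
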
So, let's spin this up and see what happens. Alright, so the application's up and running.
Now, we need to build a producer, something that will write data to that same destination, so I'll go back here and just build a producer as an example. Alright, so: producer. We don't really need the integration bits here, but we'll bring in everything else. Hit generate, and unzip the producer. Now, the producer is just gonna, I'm just gonna synthesize some data; I'm gonna have a for loop that writes out data to an outbound channel. So I'll have a bean of type ApplicationRunner, and I'll say @EnableBinding, and we'll use Source.class; Source is the counterpart to Sink, it's the producer of data. You can see that the interface definition here is just MessageChannel output() instead of MessageChannel input(). We're gonna create a runner, producer, and we'll take advantage of the Source. Say return args, and we'll build some data: for int i = 0, i less than 10, i++, alright, and we'll say MessageBuilder.withPayload, and we're gonna say greetings, or "hello # " + i, okay, .build(), that's our payload, our message object, and we're gonna send that out. So, we'll say source.output().send(message), and there we are. There's the payload, the data, et cetera, all being produced for us automatically. Now, that producer's gonna start up and it's gonna run.
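
A sketch of the producer; the exact payload text is an assumption, and the loop is made infinite a bit further on, as described below.

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.springframework.boot.ApplicationRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.messaging.Source;
    import org.springframework.context.annotation.Bean;
    import org.springframework.messaging.Message;
    import org.springframework.messaging.support.MessageBuilder;

    @SpringBootApplication
    @EnableBinding(Source.class)
    public class ProducerApplication {

        private final Log log = LogFactory.getLog(getClass());

        public static void main(String[] args) {
            SpringApplication.run(ProducerApplication.class, args);
        }

        @Bean
        ApplicationRunner producer(Source source) {
            return args -> {
                for (int i = 0; i < 10; i++) {       // later: while (true) for a constant stream
                    Message<String> message = MessageBuilder
                        .withPayload("hello # " + i) // assumption: the exact payload text
                        .build();
                    source.output().send(message);
                    log.info("sending " + message.getPayload());
                }
            };
        }
    }
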
It's gonna need the same configuration we had for the other one, so spring.cloud.stream.bindings.output.destination= and it has to match here, right? This is the destination in Kafka, so it has to match: greetings. Everything else is pretty much the same. Indeed, we can make this just a while (true) loop, to have a constant stream of data. There we go, so that's actually a little bit more useful. int i = 0, alright, let's see what we get. We'll use a logger here: private final Log log = LogFactory.getLog(getClass()), and just print out what's happening, log.info sending message.getPayload(), okay, start. Oh, that's gonna fail because I forgot to put it on a separate port, so server.port=8081, and start up again. I should've had a Thread.sleep in there somewhere, but either way, we've got 85,000 messages in that Kafka topic now, so that's kinda cool.
So, there you go: the consumer on the other side has received all those messages. We've got the consumer transforming the data as it arrives, uppercasing it, logging it out, and that's all done automatically; it just works. Alright, so we've looked at Spring Integration; we looked at how Spring Integration itself can work with Reactive Streams publishers, and how you can work with Spring Integration and use the enterprise integration patterns to process large amounts of data and to integrate disparate services and data. And then we looked at how it can play both the consumer and the publisher of items. It can consume, for example, data coming from a WebClient call or some other reactive source, basically, and we also looked at how it can be turned into a publisher, which you can then use to feed, for example, the WebSocket support in Spring WebFlux; we see how to do that in the web lesson in this video. Then, we looked at Spring Cloud Stream, and we looked at Spring Cloud Stream as a way of composing messaging-based microservices. We looked at this ability to declaratively define bindings and then have those bindings get turned into publishers, thanks to the reactive Spring Cloud Stream support, so I have Spring Cloud Stream, the Kafka binder, and the Spring Cloud Stream reactive support, and they just work together, to the point where I can just say Flux of String, incoming strings, and everything just works fine. All the messaging, all that stuff just works fine. I'm using Kafka here, but you could just as well have used any other binder that's supported by Spring Cloud Stream, including, for example, RabbitMQ.
+++++++++++++++++++++++++++++++++++++++++++++++

- In this lesson, we're gonna look at testing. We could do a whole video on the importance of testing and look at how it concerns a typical Spring developer. Indeed, I have, with my co-author Marcin Grzejszczak, done a whole video on testing: that's the Applied Continuous Delivery LiveLessons video. So, I'm not gonna try and repeat all that here. What we want to focus on is, specifically, some of the things that you may be interested in knowing about when testing reactive applications. So, we're gonna build a very simple Spring Boot 2.0 application based on the reactive web support. So, let's go ahead and build a simple, let's say, movie service; we'll call this the test service, sure. We'll use Lombok, and we'll use the reactive web support. And I think that's it, actually; that's all we need for now. All I wanna demonstrate is some very, very basic testing with Reactor, and then some testing with a reactive web endpoint, and that's about it. From there you'll have a baseline, and it's easy enough to find out more, okay? So, we're gonna hit generate. That'll give us a new project, and I'll open this up here. Alright. So, now we have a test service, and this starts simple. Let's start with an endpoint that produces a publisher that may produce an unbounded amount of data. So, we're gonna create a service here, class PublisherService, just 'cause we want something that produces data; we don't really care all that much about what the domain of the service is. So, let's say it's a flux of strings, right? Publish. This flux is gonna be unbounded; we wanna generate something that will continue over time, so sink.next, and we can send a string: "Hello @ " + Instant.now().toString(). So, there ya go. Hello. And that'll produce an unbounded amount of data. I wanna actually slow that down just a bit, so I'm gonna actually delay the elements here. I'll say delayElements, Duration.ofSeconds(1). So, there'll be a new value produced every single second, and we're gonna return that flux, okay? So, there ya go: a flux of strings. So we have a simple thing that could potentially continue forever. So, how do ya test this? That's where we're gonna focus first: how to test this in a reactive application.
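
A minimal sketch of that service.

    import java.time.Duration;
    import java.time.Instant;

    import org.springframework.stereotype.Service;
    import reactor.core.publisher.Flux;

    @Service
    class PublisherService {

        Flux<String> publish() {
            return Flux
                .<String>generate(sink -> sink.next("Hello @ " + Instant.now().toString()))
                .delayElements(Duration.ofSeconds(1)); // one value per second, forever
        }
    }
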
Alright, so now we have an empty unit test, conveniently generated for us by Spring Boot. We're gonna go ahead and use the PublisherService here, so I'm gonna create an instance of it. Of course, you could inject it as well, I suppose; doesn't really matter. And we're going to test it now. Of course, that publisher that it's gonna create for us, this.publisherService.publish(), is unbounded, as we know, and I've delayed each value by one second, right? So, we know that if I wanted to test, for example, that I got 10 records back, I'd have to wait 10 seconds, and that's not gonna work: with asynchronous code emitting a potentially unbounded amount of data, I don't want my tests, my CI environment, to be held up, gummed up waiting for these values to trickle out at whatever pace the publisher's gonna give them to me. So, instead, I wanna use a StepVerifier to step forward in time, at a virtual rate, like so many science fiction protagonists. So, here we're gonna say withVirtualTime, and we'll give it a supplier that will in turn create the publisher. And on the publisher that comes back, I'm gonna say I wanna take 10 records, and I wanna collect them into a list. And then, I'm gonna say I wanna wait one hour. What the heck, 10 hours; they're virtual, so who cares? We can do whatever we want. And then I wanna confirm that when the list comes back, the accumulated list, that it has a size of 10 records, and then I wanna verify that it all completed, as opposed to verifying that it errored out, or that it matches a particular thing. So, you can do all sorts of things here, but we wanna verify that it completes as we expect it to. K? So, now let's run this. Alright, so that seems to have worked. That's great. We can change this, of course: we can say that we expect 20, for example, and that'll invalidate the test. That'll prove the negative, basically. So, there ya go. K? So, that's one kind of test that's useful to do in a reactive environment: one that knows about the scheduler. Because remember, behind the scenes, all the code that we're writing is asynchronous; we have a scheduler there behind the scenes. We've looked briefly at that, but keep in mind, the consumer of this publisher can subscribe on a particular scheduler, right? So, Schedulers dot whatever. What we're doing here is providing a virtual scheduler that moves time forward at an accelerated rate, so that we don't have to wait for it.
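
A sketch of that virtual-time test.

    import java.time.Duration;

    import org.junit.Test;
    import reactor.test.StepVerifier;

    public class PublisherServiceTest {

        private final PublisherService publisherService = new PublisherService();

        @Test
        public void publish() {
            StepVerifier
                .withVirtualTime(() -> this.publisherService.publish().take(10).collectList())
                .thenAwait(Duration.ofHours(10)) // virtual hours; no real waiting
                .expectNextMatches(list -> list.size() == 10)
                .verifyComplete();
        }
    }
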
+++++++++++++++++++++++++++++++++++++++

- [Instructor] Now, let's move up the stack, move up a layer, and let's suppose we have a web endpoint. Let's go ahead and create a functional reactive endpoint, and the endpoint will look like so: RouterFunction<ServerResponse>, and we're just gonna do a simple greetings endpoint. We don't wanna do anything too fancy here. The goal isn't to dive into Spring WebFlux; it's to dive into how to test Spring WebFlux. So, /hi, new HandlerFunction<ServerResponse>, and we'll say return ServerResponse.ok().body(Flux.just("hello world"), String.class). Alright, and there we are. There's our simple functional reactive endpoint, and now I wanna test that /hi endpoint and confirm that it returns an HTTP 200 status code.
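
A sketch of that endpoint.

    import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
    import static org.springframework.web.reactive.function.server.RouterFunctions.route;
    import static org.springframework.web.reactive.function.server.ServerResponse.ok;

    import org.springframework.context.annotation.Bean;
    import org.springframework.web.reactive.function.server.RouterFunction;
    import org.springframework.web.reactive.function.server.ServerResponse;
    import reactor.core.publisher.Flux;

    @Bean
    RouterFunction<ServerResponse> routes() {
        return route(GET("/hi"),
            request -> ok().body(Flux.just("hello world"), String.class));
    }
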
Okay, so now we can go to our test again, and we can use the WebTestClient. So I'm gonna create a WebTestClient here, private WebTestClient, and we need to build that WebTestClient. We'll do that in the @Before method here, in the setup method. Throws Exception. Alright, there we are, and for the WebTestClient, we'll use the builder to build it. So, WebTestClient.bindToApplicationContext; we need to inject the Spring Framework ApplicationContext, like so, and we need to configure a client and give it a baseUrl. The baseUrl in this case would be localhost:8080, and we'll finally build it. So, build, and voila, there's our WebTestClient. Now we can use it, the WebTestClient, in a test to confirm things about the endpoint, in the same way that we would with the MockMvc test client in the Spring MVC world. So, public void getGreeting() throws Exception, and here we can use the test client very simply. Say webTestClient.get().uri(), and rather than an absolute URL, even better, just /hi; that's what we have here, right, /hi? And we expect a status code after we've gotten the result: expect a status code that is okay, and I guess that's it. That's a very simple one. We could do expectHeader; we could expect the body to look like a certain thing given a ParameterizedTypeReference, and we could do assertions against that, but for our purposes, just as a simple example, let's look at the status code there and make sure that it's okay, alright? So now if we run this again, run the whole test, there we go. So that works as well. So of course I can change that: I can assert a 500, confirm the negative, and you can see that it doesn't work.
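
A sketch of that test class, assuming a JUnit 4 test run with the Spring runner; the class name is illustrative.

    import org.junit.Before;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.context.ApplicationContext;
    import org.springframework.test.context.junit4.SpringRunner;
    import org.springframework.test.web.reactive.server.WebTestClient;

    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class GreetingsEndpointTest {

        @Autowired
        private ApplicationContext applicationContext;

        private WebTestClient webTestClient;

        @Before
        public void setUp() throws Exception {
            this.webTestClient = WebTestClient
                .bindToApplicationContext(this.applicationContext)
                .configureClient()
                .baseUrl("http://localhost:8080")
                .build();
        }

        @Test
        public void getGreeting() throws Exception {
            this.webTestClient
                .get()
                .uri("/hi")
                .exchange()
                .expectStatus().isOk();
        }
    }
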
Okay, so with that, we've looked at two things that you may wanna be aware of when you work with the reactive stack, the Spring web stack, the Spring WebFlux reactive web tier, and indeed with Reactor in general. There's a whole lot more to be said about this; we're just beginning to scratch the surface. There's also support in Spring Security for testing reactive Spring Security code as well. So there are a number of integrations that you're gonna want to look out for and be aware of at every layer, but what you should understand is that this stuff has already been thought of; we've already got support for it in the different frameworks and so on. So you should definitely take a look. Run this again. Very good.
+++++++++++++++++++++++++++++++++++++++

- Welcome to Lesson 4. In this lesson, we're gonna revisit the Reactive Streams specification, focusing specifically on how we can use the Reactive Streams types as a mechanism for interoperability across open-source projects: across projects like Lightbend's Akka Streams, like the Vert.x project, like RxJava 2, and of course the various Spring projects, which support the Reactive Streams specification throughout. In this lesson we're gonna look specifically at the Lightbend Akka Streams project as a way to solve certain computation problems in conjunction with Spring WebFlux and Spring Data Reactive MongoDB.

+++++++++++++++++++++++++++++++++++++++++++++
- In this lesson, we're going to revisit the Reactive Streams specification. We're gonna look at it as a compatibility layer, a standard across which different parties can interoperate. And we're going to do so today in terms of both Spring and Akka. So let's go ahead and build a very simple example here; we're gonna build an example called, let's say, tweet-service, alright? We're gonna use the reactive web support, of course, and we'll use Lombok, and I think that's it, that's all we need from the Spring side. We generate that, cd into our downloads, unzip tweet-service, and open this up in our IDE, as usual, and then we're gonna add the dependencies that we need here. So the Reactive Streams specification, ultimately, is just a set of four different interfaces. These interfaces are common enough that they serve as a foundation for different APIs, and they serve as a way for different APIs to talk to each other. So, perhaps an API isn't based upon the Reactive Streams specification, but it can vend and consume instances of the types in the Reactive Streams specification. So, here, we're gonna look at two different projects: one which is founded on it, that's Project Reactor, and thus the support for reactive programming in Spring; and another project which can vend and consume those types, namely Akka. Now, let's add these types to the class path here, so we're gonna use akka-stream-testkit_2.11, version 2.5.2, and it's gonna be com.typesafe.akka. Alright, so we've got these dependencies on the class path, and let's also add the akka-stream core module as well.
So there we are. Now, akka-stream is a processing library similar to Reactor. It's a library that allows you to operate on reactive pipelines. At its heart is its own idea of a flow, but it can also be used to create publishers and subscribers; you can vend those types and you can consume those types. So we're gonna build an application that's going to manage data, and of course, I have forgotten to add spring-boot-starter-data-mongodb-reactive; we're gonna add that to the class path here. And we're gonna build an application that manages data that it will persist into a MongoDB persistence tier. So, let's create some types. This application that we're gonna build is modeled after an example in the akka-streams documentation, so it's fairly similar, but not quite exact. We're gonna create a Tweet type here, with an id, a text field, and an author, so we also need a class of type Author. And the Author itself is gonna have a private String id, annotated with @Id, alright? So there are our basic types, but of course, this is Java, so we need more than that. We need to annotate them first as documents that can be persisted in MongoDB, and then we need getters and setters and all that kind of stuff. So we need @Data, @AllArgsConstructor, @NoArgsConstructor, all of these basic things that we need in Java; we're gonna go ahead and take care of that with Lombok, alright? And that gives us the basic skeleton of what we're gonna do here. Now, this Tweet type is going to have an accessor for the hashtags that appear in the tweet, so we're gonna have a public Set<Hashtag> getHashtags() method. And we'll come back to the Author in a second. The Author looks like this: it's gonna have a field called String handle, annotated with @Id, alright, plus @Data, @AllArgsConstructor, @NoArgsConstructor, and @Document, alright? So there are the types, and we're gonna have a method that's an accessor for the hashtags for this thing. And we're gonna say return Arrays.stream: I'm gonna take the text of the tweet and split it into an array, which we're then gonna filter, keeping every token t where t.startsWith a hashtag. Then I'm gonna map the data that we get back, so each word will be turned into a new Hashtag, and the hashtag will be comprised of the text, where we use a regular expression to unpack just the bits that belong: replaceAll with a regular expression that strips everything that isn't part of the hashtag word, and then toLowerCase(). Alright? And then we collect everything here into a Set, alright? There we go!
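
A sketch of those domain types; the exact regular expression is an assumption modeled on the similar example in the akka-streams documentation.

    import java.util.Arrays;
    import java.util.Set;
    import java.util.stream.Collectors;

    import lombok.AllArgsConstructor;
    import lombok.Data;
    import lombok.NoArgsConstructor;
    import org.springframework.data.annotation.Id;
    import org.springframework.data.mongodb.core.mapping.Document;

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    @Document
    class Tweet {

        @Id
        private String id;
        private String text;
        private Author author;

        // the "missing constructor" added later in this lesson
        public Tweet(String text, Author author) {
            this.text = text;
            this.author = author;
        }

        public Set<Hashtag> getHashtags() {
            return Arrays.stream(this.text.split(" "))
                .filter(t -> t.startsWith("#"))
                .map(word -> new Hashtag(word.replaceAll("[^#\\w]", "").toLowerCase()))
                .collect(Collectors.toSet());
        }
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    @Document
    class Author {

        @Id
        private String handle;
    }

    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    class Hashtag {

        private String text;
    }
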
So there's our domain model, and now we need to build some services that actually work with it. We're gonna create a service that processes this data, so let's see here: @Service, class TweetService, and the TweetService is gonna take advantage of the repository that stores this data. It's a MongoDB repository here, that we're gonna create to store the data in MongoDB, and it's a reactive repository: we're gonna call it TweetRepository extends ReactiveMongoRepository<Tweet, String>. Alright. And with that, we have part of what we need to be able to address the use cases supported by the service. The service is going to do processing on the data; it's going to have granular methods that support business logic, and also just reading and kind of arriving at synthesized views of the data. So we're gonna do that here. I want to have an endpoint that will give me all the tweets, so here's the method for that: Publisher<Tweet> getAllTweets(), alright? I'm gonna say return this.tweetRepository, the repository, and we'll say repository.findAll(). Very good, there's that. And we want to have another endpoint that will return all the hashtags from all the tweets, and so we need to do a little bit of processing here, and here we can take advantage of akka-streams. So I'm gonna say Publisher<Hashtag> getHashtags(). Of course, keep in mind, the hashtags are part of the tweet, right? So we need to actually unpack that. Let's do that here: getHashtags, that's a cleaner way to describe that type there. And then what we're gonna do is create a method that uses akka-streams, as I say, up here. We're gonna use the javadsl here, so we're gonna say return Source.fromPublisher(), and the publisher in this case is the tweets, all the tweets, and then we're gonna map the data, so Tweet::getHashtags. And then from here, we're gonna reduce all of the different sets of hashtags into a single thing, so we're gonna have a method that returns a joined version of anything that it sees: private <T> Set<T> join(Set<T> a, Set<T> b), and in it we'll say Set<T> set = new HashSet<>(), set.addAll for the two different sets, a and b, and return set, okay? Alright, so there are our two different sets, and now we're gonna join them together, and then we're gonna concatenate everything that we have here. So I'm gonna say new Function from Set<Hashtag> to an Iterable<Hashtag>: we're gonna create an iterable of hashtags, given a set of hashtags. Okay? That's fairly easy to do; we can actually just return the input as the output. It's an identity call, really. This gets rewritten fairly easily; we can just say in is out, right? That satisfies the contract. Now, for the concat step we do want to keep the type information there, so we're gonna say that it's a function of Set<Hashtag>. And then we're gonna run this whole thing with a sink, and we're gonna talk about the sink in a second, but let's see if that all works. Okay, looks like that's okay.
So, now we need some way to take this publisher, this pipeline rather, this flow, and turn it into something that we can then process. We want a Publisher out of this, right? So we're gonna use the concept of a Sink: in akka-streams you've got this concept of a Source, and over here you've got the concept of a Sink. The Source is the thing that originates the data; the Sink is the thing that accepts the incoming data and does something with it. Well, in this case, I wanna distribute computation across Akka actors. Now, Akka is an actor framework. Actors are a way of describing computation in a way that doesn't require a lot of thinking about concurrency. It's based ultimately on the Erlang OTP model, the model that allowed for systems to be very highly available and very, very scalable, by having a supervisory hierarchy that babysits, basically, lots of possibly failure-prone processes, and restarts them if anything should go wrong. This supervisory hierarchy means that systems built in such a way have, you know, five nines, six nines, and so on, of availability.
++++++++++++++++++++++++++++++++++++++++++++++++++

- [Josh] We need to take advantage of Akka, but I don't have Akka in play right now. We're using Akka Streams, but we're not necessarily using Akka, right? This sink can be anything you want it to be, so we need to configure Akka, and that's where we're gonna create our little configuration class here. And we're going to create a few beans here that we need; we need the ActorSystem for Akka. So we return ActorSystem.create, and I'm gonna call this bootiful-akka-stream. Alright, there's this, that's the configuration class. And I'll create a bean of type ActorMaterializer. Alright, so, return ActorMaterializer.create, and we'll pass in this.actorSystem. Okay, there we go. Those are the two beans that we need. I'm gonna use that ActorMaterializer in the service, so I'll say private ActorMaterializer, add the constructor argument there, and we'll use that here: we'll say Sink.asPublisher, with fanout, and I'm gonna pass in this.actorMaterializer, and there we are. That'll give us all the hashtags, and with that in place, we can now create an endpoint that returns all of this data.
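
Putting the configuration and the service together, a sketch might look like this; note that in the Java DSL, Sink.asPublisher takes an AsPublisher enum rather than a boolean.

    import java.util.HashSet;
    import java.util.Set;

    import akka.actor.ActorSystem;
    import akka.stream.ActorMaterializer;
    import akka.stream.javadsl.AsPublisher;
    import akka.stream.javadsl.Sink;
    import akka.stream.javadsl.Source;
    import org.reactivestreams.Publisher;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
    import org.springframework.stereotype.Service;

    interface TweetRepository extends ReactiveMongoRepository<Tweet, String> {
    }

    @Configuration
    class AkkaConfiguration {

        @Bean
        ActorSystem actorSystem() {
            return ActorSystem.create("bootiful-akka-stream");
        }

        @Bean
        ActorMaterializer actorMaterializer(ActorSystem actorSystem) {
            return ActorMaterializer.create(actorSystem);
        }
    }

    @Service
    class TweetService {

        private final TweetRepository tweetRepository;
        private final ActorMaterializer actorMaterializer;

        TweetService(TweetRepository tweetRepository, ActorMaterializer actorMaterializer) {
            this.tweetRepository = tweetRepository;
            this.actorMaterializer = actorMaterializer;
        }

        public Publisher<Tweet> getAllTweets() {
            return this.tweetRepository.findAll();
        }

        public Publisher<Hashtag> getHashtags() {
            return Source
                .fromPublisher(getAllTweets())
                .map(Tweet::getHashtags)         // one Set<Hashtag> per tweet
                .reduce(this::join)              // one big Set<Hashtag>
                .mapConcat(hashtags -> hashtags) // flatten the set into individual elements
                .runWith(Sink.asPublisher(AsPublisher.WITH_FANOUT), this.actorMaterializer);
        }

        private <T> Set<T> join(Set<T> a, Set<T> b) {
            Set<T> set = new HashSet<>(a);
            set.addAll(b);
            return set;
        }
    }
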
We can actually create a REST endpoint that will expose the data that we wanna be able to look at, so let's create a very simple Spring WebFlux functional reactive endpoint here. And we'll say class, how do I wanna do this? I guess we can just put it in as part of the application up here; it's just a bean, after all. So, RouterFunction<ServerResponse> routes, and I'll say return RouterFunctions.route, RequestPredicates.GET /tweets, new HandlerFunction, and we're just gonna create a very simple endpoint, one that's gonna handle the tweets endpoint. Alright, so, ServerResponse.ok().body, and in order to do this, we're gonna use the TweetService, naturally, so we'll say tweetService.getAllTweets() and Tweet.class, and we can simplify this a good deal, of course, with some static imports. There you go, static imports again. So that's the tweets endpoint, and we're gonna have another endpoint that handles hashtags. So let's do that: another ServerResponse body, this time tweetService.getHashtags(), and of course this one is Hashtag.class. Alright. Okay, so there we go. There are our different endpoints: tweets and hashtags.
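
With static imports applied, a sketch of the two routes.

    import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
    import static org.springframework.web.reactive.function.server.RouterFunctions.route;
    import static org.springframework.web.reactive.function.server.ServerResponse.ok;

    @Bean
    RouterFunction<ServerResponse> routes(TweetService tweetService) {
        return route(GET("/tweets"),
                request -> ok().body(tweetService.getAllTweets(), Tweet.class))
            .andRoute(GET("/hashtags"),
                request -> ok().body(tweetService.getHashtags(), Hashtag.class));
    }
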
Now, finally, we need to have some data. This wouldn't be a very good example if we didn't have something to look at, so let's go ahead and just create an ApplicationRunner bean that'll write some data to the database that we can use as an example here. So, ApplicationRunner, runner, no, just producer, sure, and we're gonna use the TweetRepository to do this work. Return args, and create that. Alright, so now we've got a repository, and we're gonna save some data into the database, and we need to create a few authors, of course. We'll create a few authors here, and we're gonna pay homage to our friends on the Akka team over at Lightbend. This is Jonas, who is the founder of Akka. Okay, so jboner; these are Twitter handles, of course. Then viktor = new Author, viktorklang, the legend of Klang. And my name is Josh, of course, so I'll just put myself in there as well, since I'm just about out of names. There we are. Good, so we have now three different authors, and we need to create some tweets. So let's say Flux.just, and I'm gonna say "Woot, Konrad will be talking about #Enterprise #Integration done right #akka #alpakka", and we're gonna use viktor, okay? Okay, there's that one. And then we'll have strings here, so I'm gonna create a Tweet out of this instead. Viktor, very good. And then we'll have another one. This one will be new Tweet: "#scala implicits can easily be used to model capabilities, but can they encode obligations easily? Easily as in ergonomically?" That's viktor. Okay, so this is me typing up some tweets, so we have to make sure we get these right. So, new Tweet: "This is so cool! #akka". Alright, so there we go. Good, another one. And we want a new Tweet. Oh, and by the way, the Tweet object doesn't have a constructor without the ID, so let's go ahead and create one to make this job a little easier. I'll say text and author, good. That'll be the missing constructor; that's why the compiler's upset with me up here. So I'll now create some more: "Cross data center replication of event sourced #akka actors soon available, using CRDTs and more". Okay, and that's jonas. And let's see, new Tweet: a reminder that #SpringBoot lets you pair-program with the Spring team. That's yours truly. And finally, new Tweet: "Whatever your next platform is, don't build it yourself. Even companies with the motivation to do it fail a lot." Okay, josh, right, very good. So there's our publisher of tweets. Alright, so there's the runner; the publisher of tweets is this. And then, finally, we need to save all of that. So we're gonna say repository.deleteAll().thenMany, and we're gonna say repository.saveAll, passing in the tweet flux. We're gonna say thenMany and find everything, so repository.findAll(), and then we're gonna visit all the results that come back; we're gonna say, why don't you print them out, so System.out::println. Alright, so there we go. There's the logic that will run on startup and write the data to the database. So that's good; it looks like we're on the right track here.
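
A sketch of that startup runner; the authors and tweet text follow the transcript, and Josh's handle here is an assumption.

    import org.springframework.boot.ApplicationRunner;
    import org.springframework.context.annotation.Bean;
    import reactor.core.publisher.Flux;

    @Bean
    ApplicationRunner producer(TweetRepository repository) {
        return args -> {
            Author viktor = new Author("viktorklang");
            Author jonas = new Author("jboner");
            Author josh = new Author("starbuxman"); // assumption: Josh's Twitter handle

            Flux<Tweet> tweets = Flux.just(
                new Tweet("Woot, Konrad will be talking about #Enterprise #Integration done right #akka #alpakka", viktor),
                new Tweet("#scala implicits can easily be used to model capabilities, but can they encode obligations easily? Easily as in ergonomically?", viktor),
                new Tweet("This is so cool! #akka", viktor),
                new Tweet("Cross data center replication of event sourced #akka actors soon available, using CRDTs and more", jonas),
                new Tweet("A reminder: #SpringBoot lets you pair-program with the Spring team.", josh),
                new Tweet("Whatever your next platform is, don't build it yourself. Even companies with the motivation to do it fail a lot.", josh));

            repository
                .deleteAll()
                .thenMany(repository.saveAll(tweets))
                .thenMany(repository.findAll())
                .subscribe(System.out::println);
        };
    }
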
Now, finally, we want to actually see it all work. So let's go ahead and start the application and see what we get. First of all, it's up and running, good. localhost:8080/tweets: right, there are all the tweets. /hashtags: there are all the hashtags, deduped and everything. So that processing was done for us by Akka. Now let's review. Now that we've kind of got everything in place, there's a lot of stuff happening here, and we wanna make sure that we're on the same page. So, we've got the basic types here that we're saving in the database, one of which has the ability to give us all of its own hashtags, just its own: it's gonna unpack the text, parse it, and find the hashtags. We also have a repository that's reactive and MongoDB-aware, so it'll write to the database and so on. And then we have a service that gives us all the tweets and all the hashtags, regardless of which tweet those hashtags belong to. To do that, it gets all the tweets, and then it unpacks each one of those and gets all of its hashtags, so basically at this point we have a whole bunch of collections of hashtags, and then we reduce it, because, as I say, we have a publisher full of collections of hashtags. So we reduce it: we take each pair of sets, set A and set B, and merge them into one set, so finally we have one big collection, and that's what we have left over. And then, finally, we're gonna take all of that and run it into a sink; we're gonna use Akka to do that. Now, Akka is an actor system. In this case I'm using a local ActorMaterializer; this is using a local actor system. But actors can be distributed. There's no reason that this work couldn't run on something like a tuple space, or a distributed, Erlang-style actor system. There's no reason that this code, which is fundamentally asynchronous and non-blocking, couldn't have been done across thousands of actors in a cluster, doing distributed computation. So in this case I'm doing the computation in memory, right? I'm using a local ActorMaterializer. The best thing about actors, although in this case we're not really dealing with them directly, the nice thing about actors is that they give you a programming style that, if you comply with that style, guarantees certain results. It's a way of modeling systems that looks sort of like a mailbox: you have messages that can be put in, and you can take those messages out. You don't have shared state, though. You don't have different actors talking to the same synchronized variable, for example, or at least you shouldn't. And if you write your code in that way, as a bunch of things that send messages to each other and deposit messages into each other's mailboxes, which those actors can then process as they're able to, then you can write systems that scale out and are also efficient. So this ActorMaterializer is going to hydrate, if you will, our Akka Streams flow and turn it into processing that runs in terms of these actors, but it's all gonna be in the local JVM. But again, as I say, it's worth stressing that, because it's Akka, it's just a matter of configuration whether these actors are distributed or not, alright? The code is the same, basically, save for the configuration of the ActorMaterializer. We are doing this to demonstrate that you can have a Spring WebFlux application that talks to MongoDB using Spring and still take advantage of the Akka Streams project and the other things that build upon it, including Alpakka, which is their integration framework, and Akka itself, and so on.
So there's a lot of benefit in having this base type, this common type, that's accessible from all these different projects. Suddenly, siloed communities that may not have been accessible before are now easy to get to. You can now write code that works with, and takes the best-of-breed components from, different communities. So you could also write code that uses, for example, Vert.x, or you could write code that uses RxJava 2, right? And there's no reason these things couldn't interoperate. Alright, with that, we've looked at the Reactive Streams specification as a means to integrate different projects. Obviously, the Reactive Streams specification inspired the Java 9 Flow support. So Java 9 has java.util.concurrent.Flow.Publisher, java.util.concurrent.Flow.Subscriber, java.util.concurrent.Flow.Processor, and java.util.concurrent.Flow.Subscription. These four types are mirror images of the same types in the Reactive Streams specification, and in the same way they give you interoperability.
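
For Reactor specifically, reactor-core ships a JdkFlowAdapter (usable on Java 9 and later) that bridges between the two sets of types; a quick sketch.

    import java.util.concurrent.Flow;

    import reactor.adapter.JdkFlowAdapter;
    import reactor.core.publisher.Flux;

    public class FlowInterop {

        public static void main(String[] args) {
            Flux<String> flux = Flux.just("a", "b", "c");

            // Reactor -> java.util.concurrent.Flow.Publisher
            Flow.Publisher<String> flowPublisher = JdkFlowAdapter.publisherToFlowPublisher(flux);

            // java.util.concurrent.Flow.Publisher -> Reactor
            Flux<String> roundTripped = JdkFlowAdapter.flowPublisherToFlux(flowPublisher);
            roundTripped.subscribe(System.out::println);
        }
    }
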
Of course, that all hinges upon people embracing Java 9, and now, with Java 10 already out there and Java 11 very close to being out there, we're in a good place. So hopefully that will be a well-entrenched option for a lot of developers in the near future.
