Bringing a human perspective to data integration, mapping and AI

Listen to this episode:

About this episode:

In this episode, Matthew Stibbe interviews Markus Kolic from Sun Life. They dive into the rapidly evolving world of AI and data engineering, exploring both the hype and the hands-on realities within large, established organizations. The conversation begins with a look at the extraordinary possibilities around generative AI, while also acknowledging the current overexcitement and uncertainty. Markus discusses how true, long-term value will come from careful experimentation—often by small, innovative teams—rather than flashy marketing claims.

He then shifts to the challenges of integrating complex legacy systems in enterprise environments. The “Strangler pattern” is highlighted as a practical approach to gradually replace aging infrastructure with modern web-based applications. Markus stresses that success depends on well-structured data models, careful mapping, and documentation that evolves over time. Tools like CloverDX and version control platforms such as GitHub bring agility, transparency, and accountability to data transformations.

Ultimately, the conversation emphasizes that data engineering is about people and understanding their needs. Despite the complexity and chaos of new technologies, the end goal remains to deliver meaningful, human-centred information that genuinely improves processes and experiences.

AI-generated transcript

Matthew Stibbe (Articulate)   0:06

Well, hello everybody.

Welcome to Behind the Data with CloverDX.

I'm your host, Matthew Stibbe, and today I'm talking to Markus Kolic, who is Associate Director of Engineering at Sun Life US.

Great to have you on the show, Markus.

Markus Kolic   0:20 Hey, I'm happy to be here, Matthew.

Matthew Stibbe (Articulate)   0:23 Before we start diving into your world of data and the projects you're working on, I'd like to start with a sort of broader question. What's going on in the world at large, particularly around AI, that's catching your attention at the moment?

Markus Kolic   0:40 Well, it's fun that you mention AI. That's a very, very easy direction to take this question, because we're at a moment of incredible possibility right now. And what I find really interesting is that there are two directions for looking at AI currently. There's the big public AI hype cycle that is happening, you know, everybody is excitedly using ChatGPT and it's being sold out the wazoo, and I tend to think that much of this is very, very overhyped. You know, Ed Zitron's writing is really interesting about this. But the actual technology that exists to power generative AI is incredible, and the actual experimentation happening with it outside of the big public large language models is fascinating and strange, and still in its infancy. I think every day there are new projects and new use cases appearing: both ones that are being made up by large corporations who want to justify their AI investments, but also curious, interesting experiments being done by weirdos and outsiders and academics and hobbyists, people in basements, some of which are astounding. And I think the use case for how these technologies are used for data integration, data engineering, enterprise data operations is not at all understood yet. We are still on day zero of this, and I think a lot of people are leaping several steps ahead in the process already and thinking about, well, can I use this for this?

Matthew Stibbe (Articulate)   2:03 Mm hmm.

Markus Kolic   2:10 Can I use this for that? How can I shoehorn it in? You know, when the real killer app, when the real use case makes itself clear, it will prove itself out in practice. That's not going to be invented by marketing people. That's going to appear over time and make itself known.

Matthew Stibbe (Articulate)   2:22 Yeah.

Markus Kolic   2:25 So I think everybody needs to sort of take a breath, you know, especially those of us who work at large, institutionally conservative organizations. These things are not happening overnight, despite the best efforts of every venture capitalist and investor in Silicon Valley. But they're going to go in interesting and weird directions, and the best thing to do right now is keep an open eye and look for curious things.

Matthew Stibbe (Articulate)   2:48 An open eye for OpenAI. Somebody asked Chairman Mao once what he thought the consequences of the French Revolution were, and he said: it's much too early to tell. I think we're in a similar state with this AI thing, and I see an awful lot of share price engineering, right? People just putting AI into something to boost the share price, whether it's added value or not.

Markus Kolic   3:08 Yes.

Matthew Stibbe (Articulate)   3:12 So when you're thinking about data engineering, and how companies and teams can experiment with it but deal with the rapid pace of change, what's your best sort of advice about how to make that work?

Markus Kolic   3:26 Well, the basic challenge in an organization, in a business that is regulated and profit driven, is that generative AI by definition involves some unpredictability. You know, it's a non-deterministic system. You could put the same input into it twice and get two different results. That explodes the basic assumptions that we have about how software works, and how enterprise software works, because enterprises like predictability.

Matthew Stibbe (Articulate)   3:52 Yes.

Markus Kolic   3:52 Enterprises like to know exactly what they're going to get and exactly what it's going to cost. Generative AI, definitionally, does not do that. It introduces chaos instead. Now, that's not a reason not to use it. That means you have to understand the chaos, and from an organizational perspective, that means you need room for experimentation, you need room for iteration, and you need room for failure. The way to succeed with AI is going to be to take a few faintly insane people, put them in a room, and let them experiment for a year. Then you have to accept that some chaos, some rapid change, is going to be the result of that, and you have to be tolerant of this: able to pick out the good parts, and able to quickly correct when you hit the bad parts, which inevitably you will. There are going to be cases where AI backfires in unpredictable, strange and upsetting ways. So the point is future proofing. The point is to be ready for these unpredictable outcomes and able, as an engineering team, to quickly pivot, quickly reorient. You know, rapid deployment becomes important here. If you're at an organization where it takes a week or two weeks or a month to get through a change management process to ship a new piece of code, you will not survive in the world of AI-driven engineering. It's just not realistic.

Matthew Stibbe (Articulate)   5:06 I think there's a bit actually that is around corporations' attitude to risk as well, in the sense that, by analogy, if you've got driverless cars, there is a strong probability that when that technology becomes more widespread, it will generally reduce the number of road accidents.

Markus Kolic   5:12 Mm hmm.

Matthew Stibbe (Articulate)   5:27 But the first time, well, it's probably already happened, hasn't it? But the first time a car hits somebody in your family, it's a shock and a horror, and there'll be lawsuits, right? So we're sort of societally happy with the level of accidents on the road, because everyone has individual control over the vehicle they're driving.

Markus Kolic   5:38 Hey. Mm hmm.

Matthew Stibbe (Articulate)   5:49 And when you put AI in the driving seat, everyone's really uncomfortable about that, because suddenly, you know, who's responsible? Well, it's not the driver anymore, it's somebody else.

Markus Kolic   5:58 Right.

Matthew Stibbe (Articulate)   6:00 It's going to be a thing in the corporate world as well, when some AI-driven data pipeline just spits out the wrong thing and somebody gets billed £10,000 for the wrong stuff. I don't know if that's... anyway, I'm interviewing myself.

Markus Kolic   6:14 Yes. No, but I think you've hit on a very important tension there, which is around accountability. And I'm thinking of the old slide from the management presentation at AT&T in the 70s that goes around on Twitter now and then, which says a computer

Matthew Stibbe (Articulate)   6:15 So let's move on.

Markus Kolic   6:30 can never be held accountable, therefore a computer must never make a management decision, right? And the idea there is that people dislike the lack of control. You know, if I'm the driver of my car and I crash it, it's my fault, I'm accountable for this; and when people feel like the world at large is not accountable, that is frightening. Now, the catch here is that we already desperately lack accountability everywhere. The larger a corporation gets, the more accountability sinks it forms. And actually, I have the book here. There's a wonderful book about this that I recommend to everybody: The Unaccountability Machine, which came out this year. Dan Davies wrote this fascinating book, and the premise is that bureaucracies and large organizations, which are the engine that runs much of the world we live in today, naturally push accountability away from individuals and onto systems as they grow, and that creates these sinks where bad outcomes can occur and nobody's really responsible. Nobody's really in control. It's just the system, and the system is responding according to some set of incentives and inputs and outputs that are not really understood by anybody. This is already happening today, everywhere. You know, AI is really just a manifestation of that in miniature. A large organization can't really explain why a decision was made by, say, an insurance company, right?

Matthew Stibbe (Articulate)   7:50 Hmm.

Markus Kolic   7:52 I work for an insurance company. Thousands of people were involved in any given outcome, any given decision that this company makes. You know, where is the accountability there? What is, functionally, the difference between an AI running a whole bunch of processes in the background that simulate human judgement, and a large group of people running a bunch of processes in a company that involve their own human judgement, reaching an outcome that no one person created? I'm not sure those two things are so different.

Matthew Stibbe (Articulate)   8:17 Yeah, that's a really interesting thought, isn't it? It reminds me of a thing I heard: that management systems or business systems are perfectly optimised to get the results that they deliver, right? And, you know, running a business as I do, you're constantly thinking, well, I want better results, I want different results, but I've got a system that's perfectly optimised to give me the ones I'm getting. And how do you move that forward? Before we move on, I will disclose a thing: I am a techno-optimist. I personally welcome our new AI overlords when they're ready. I want to live in an Iain M. Banks world, you know, all watched over by machines of loving grace. That's my dream, but I probably won't live to see it. Anyway, let's move on. We talked a few weeks ago, preparing for this, and you used a phrase, the strangler model, and it stuck in my brain. So tell me, what is the strangler model? What does that mean?

Markus Kolic   9:22 Yes, the strangler pattern. And I'm going to Google this.

Matthew Stibbe (Articulate)   9:24 Pattern.

Markus Kolic   9:25 So I can give you the proper citation for the person who came up with this. But the strangler architecture pattern is named after, I believe, the strangler fig, which is a type of tree.

Matthew Stibbe (Articulate)   9:37 Right. Not the rock band.

Markus Kolic   9:40 No. And the premise is: in large software systems, if you have, as all of us who work at large businesses do, legacy systems, you may have a patchwork of them. You have existing software that is in need of replacement, but doing that at large scale is very, very, very hard. If you have, say, an ERP system that serves a vast manufacturing organization, you can't just unplug it and plug in the new one, especially thinking about how interconnected all these things are, and, in the world of data engineering, how much exists downstream that depends on the data that comes from these source systems. So how, if you need to take something of this scale that was built in, say, the 1980s or 90s, can you bring it into the 21st century without breaking everything? The strangler pattern is a model for moving these things gradually. You take your existing system, diagram it out, and you find: which piece can I strangle first? You want to grab one little piece at a time, rewrite one portion as a microservice, or in whatever your new system is, and you link them together using APIs. Instead of replacing your whole system outright, you go piece by piece by piece, and there's some contract in the middle between the old world and the new world that lets them interchange data with each other. Over time, you take one chunk and then another chunk and then another chunk, and gradually your system moves, your users move, your expectations move. This can happen over the space of years, but in a large organization that's really the only sensible way you can do it. Otherwise, as we all know, projects to rewrite and replace systems like this are constantly failing; they run massively over budget and they are rejected. Doing it incrementally, with experimentation and the ability to change your strategy partway through, is much more practical.
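As a rough illustration of the pattern Markus describes, here is a minimal sketch in Python. All of the names are hypothetical: a single facade stands in for the routing layer, and plain functions stand in for the legacy system and the new microservices.

```python
# Minimal strangler-pattern sketch. In a real system the facade would be an
# API gateway or routing layer; here, plain functions stand in for the legacy
# monolith and the newly written services. All names are illustrative.

def legacy_handler(area: str, request: str) -> str:
    # Stand-in for a call into the old monolith.
    return f"legacy:{area}:{request}"

def new_handler(area: str, request: str) -> str:
    # Stand-in for a call into a new microservice.
    return f"new:{area}:{request}"

class StranglerFacade:
    """Routes each domain area to legacy or new code, one migration at a time."""

    def __init__(self) -> None:
        self.migrated: set[str] = set()  # areas already served by the new system

    def migrate(self, area: str) -> None:
        # "Strangle" one piece: from now on, this area hits the new service.
        self.migrated.add(area)

    def handle(self, area: str, request: str) -> str:
        handler = new_handler if area in self.migrated else legacy_handler
        return handler(area, request)

facade = StranglerFacade()
print(facade.handle("billing", "invoice-42"))  # still served by legacy
facade.migrate("billing")                      # cut one piece over
print(facade.handle("billing", "invoice-42"))  # now served by the new system
print(facade.handle("claims", "claim-7"))      # untouched areas stay on legacy
```

Callers only ever talk to the facade, so each migration is invisible to them; the "contract in the middle" is the shared handler signature.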

Matthew Stibbe (Articulate)   11:34 I can sense that. I mean, I've tried building big software projects with a top-down sort of model, and I've tried more of an agile approach. I didn't know it was called a strangler pattern, but what are the downsides or the risks when you're trying to do that with these big legacy projects, if you try and tackle them incrementally?

Markus Kolic   11:55 Well, the trouble is if you get stuck, or if you get distracted, or, in the case of a corporation, say you lose funding, right? Inherently, when you're following this kind of pattern, you are making compromises and temporary workarounds along the way. And there's the old truism from corporate software development: there is nothing so permanent as a temporary solution. Because the thing that was quickly made up is there because it's practical, and practical things survive. So it's very likely, when you're in this kind of situation, if you don't have engineering leadership and business leadership that understands where you're trying to go, the long-term goal of the project and what you need, that you stop halfway through, and now you've made the problem worse instead of better.

Matthew Stibbe (Articulate)   12:37 Because you're down in the valley between two mountains, right?

Markus Kolic   12:38 The old comic, yeah. Yeah, exactly. I'm thinking of the old xkcd comic that says: hey, there are 21 competing standards for this, we need to all get together and make a new standard. Six months later, there are 22 competing standards.

Matthew Stibbe (Articulate)   12:54 I love xkcd. Anyone listening to this: xkcd.com is, yeah, the unacknowledged legislator of our world. OK, so, you've told me that the sort of projects you're working on now are moving towards, or supporting, delivering web applications and web-based delivery, and progressively migrating legacy systems over to that. So can you give me some examples of some of those web apps? I mean, where are you headed to? What does the bright new world look like to you?

Markus Kolic   13:31 So, where I work: Sun Life, which we haven't talked about at all yet, I don't think. My employer, Sun Life, is a large provider of employee benefits in the United States. Our Canadian parent company provides all kinds of financial services; it's a pillar of the Canadian financial system. Ours, here in my unit, is pretty specifically focused on employee benefits insurance: dental insurance, life, disability, absence and leave management, things of that nature. And the products that our team delivers are management applications for the consumers of that insurance, our clients, which includes the brokers who sell that insurance to their employer clients, the employers themselves who offer that insurance to their members, and the members who actually use that insurance, who need to go and get their dental care, or, you know, need to get their disability leave covered, for instance. And many of us have had pretty bad experiences in life dealing with our insurers. These systems can be bureaucratic and sclerotic and difficult, and the Internet has not generally been a first-class citizen in that process. You know, much of the American medical establishment still relies on fax machines all over the place, because they're secure and they're auditable, and HIPAA has very strict compliance rules about that. Bringing this world into the 21st century is quite difficult. At Sun Life, we have an additional complexity, which is that we tend to grow by acquisition. Sun Life has acquired many companies in the US over the recent decades, one of which was Maxwell Health, where I was; we were acquired in 2018 and integrated our system into theirs. In order to get lots of legacy systems, lots of acquired systems, different worlds, into a usable place for the modern Internet, you cannot rely on a disparate series of databases. It's just too hard. You have too many old applications and old assumptions, which may contradict each other.
So the exercise that we're undertaking, and this is a large, multi-year project done by my teams and many other teams that we're working with at Sun Life, is to build a proper domain-driven data model of all of our web-facing client data. So, irrespective of the source applications that might serve things like enrollment and billing, the guts of the insurance operation, you want to have a data store that cleans and organizes that information in a way that is accountable not to the software but to our human understanding of the data. When I say domain-driven design, we mean the domain of: what is an insurance policy? What is an insurance enrollment? What is a member? What is a dependent? We've had to make a clean model, written according to our best judgment and our experience, that defines this data canonically. Then you can map your source system data into that store.
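To make the idea concrete, here is a hedged sketch of what such a canonical model and its per-source mappers might look like in Python. The field names, the sentinel date, and the two source formats are invented for illustration; they are not Sun Life's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# A canonical, "platonic" record: every source system maps into this one
# shape, whatever its own storage conventions were. Fields are illustrative.

@dataclass
class Enrollment:
    member_id: str
    policy_id: str
    effective_date: date
    termination_date: Optional[date]  # None means the enrollment is open-ended

def from_system_a(row: dict) -> Enrollment:
    # Hypothetical source A: ISO date strings, "9999-12-31" marks open-ended.
    term = row["term_dt"]
    return Enrollment(
        member_id=row["mbr_id"],
        policy_id=row["pol_id"],
        effective_date=date.fromisoformat(row["eff_dt"]),
        termination_date=None if term == "9999-12-31" else date.fromisoformat(term),
    )

def from_system_b(row: dict) -> Enrollment:
    # Hypothetical source B: (year, month, day) tuples; end key absent if active.
    return Enrollment(
        member_id=row["member"],
        policy_id=row["policy"],
        effective_date=date(*row["start"]),
        termination_date=date(*row["end"]) if "end" in row else None,
    )

# Two differently shaped source rows land on the same canonical record.
row_a = {"mbr_id": "M1", "pol_id": "P1", "eff_dt": "2020-01-01", "term_dt": "9999-12-31"}
row_b = {"member": "M1", "policy": "P1", "start": (2020, 1, 1)}
assert from_system_a(row_a) == from_system_b(row_b)
```

The point of the design is that every downstream consumer sees only `Enrollment`, so each source system's private conventions stop leaking past its mapper.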

Matthew Stibbe (Articulate)   16:26 So the mapping is: you've got a mental framework, or a documented framework, of what is a policy, and then, and I'm replaying this back to you because I'm a dummy with this and I just want to make sure I've understood it,

Markus Kolic   16:31 Mm hmm.

Matthew Stibbe (Articulate)   16:39 you acquire a new business, or you get a new source of data, and you can take whatever they've got in whatever system and go: that looks like a patient record, or that looks like a policyholder record, and we need this sort of information, and I'm mapping it onto my platonic data structures.

Markus Kolic   17:00 Yes, platonic. I like that way of thinking about it, because that's exactly what it is: the principle comes first. Now, in practice, that's really, really, really hard to do. Every source application, any data system that was built under its own steam, ends up embedding its own assumptions about what that data means and how it works. It's there even in the structure of a database. Take, for instance, the relationship that an insurance enrollment has with time: when does it start? When does it end? That may be defined one way in one system and a completely different way in a different system. So there's a mental challenge in the data mapping, in figuring out where these assumptions live. We might call them business logic, which I think is sort of a diminishing term for, you know, cognitive assumptions made by an application about what its data is like. And we're trying to translate them into a shared language. That is inherently really difficult, especially because, in older systems, often the people who built it may be long gone. You may have relatively few people and small amounts of understanding of what's going on in that database. So not only are you trying to translate concepts from one system into another, you are trying to figure out what those concepts were in the first place. There's archaeology involved in all of this as well. So we found you have to invest a lot of time and effort up front in understanding that data for the process to go smoothly.

Matthew Stibbe (Articulate)   18:39 Funny you use the phrase archaeology, by the way; one of our previous guests, in their spare time, did data management for archaeological projects. But I've never thought about it like this: you've got to go and look at the data with almost a historical or an archaeological perspective. All of this sounds like a very human endeavour. It's human beings trying to understand human beings who made decisions about data structures and assumptions or whatever. When you're thinking about this mapping and these sort of canonical structures, how are you documenting that? I mean, what's the mechanism for capturing your assumptions for future generations?

Markus Kolic   19:21 Well, I'm glad you mentioned that, actually, because we've made some advances in this on my team, and I'll give some credit here to a heroic engineer, a team lead I work with, Kurt, who has built out a good solution for this. When we began this particular project, we were tracking this information in a great big spreadsheet. Real simple. And I think one of your previous interviews here that I looked at had somebody who said: I believe that if you have made a spreadsheet, you have already made a mistake.

Matthew Stibbe (Articulate)   19:49 Well, this is my Clover colleague Pavel Najvar's opinion. Yeah, if you're doing it in Excel, you've failed, basically. There's much truth in that.

Markus Kolic   19:59 Which I categorically disagree with. However, there is some truth here. I'm very pro-spreadsheet; they're dear to my heart, and the spreadsheet is a foundational organizing principle of modern business that was revolutionary to it, in the same way that, say, the filing cabinet was, and that, you know, potentially a future AI application might be.

Matthew Stibbe (Articulate)   20:11 Yes.

Markus Kolic   20:18 So I don't want to lose the spreadsheet just yet. However, it was not practical for tracking these data mappings at scale, certainly, and certainly not for managing change to them. Because what we realized fairly quickly is that data archaeology and data mapping is not static, right? You can't do this exercise once and say, OK, great, now I understand such-and-such system.

Matthew Stibbe (Articulate)   20:37 Mm hmm.

Markus Kolic   20:41 I'm moving on to the next one. As you move on to the next one, or as you expand your application, as you move on to new requirements, new areas, you make discoveries that affect the existing ones. So your data mapping needs to be a living document. It needs to be something that can be collaborated on and shared by everybody, and it needs to be organized in some way that can be maintained and parsed and analyzed. So we ended up moving the whole thing into a GitHub project entirely dedicated to the metadata that defines our mappings. This is not used programmatically (we are defining it manually in Clover and writing our own SQL queries), but we're documenting it in an organized, versioned way, where we actually store these mappings in readable YAML. So there is a tag that says: source field X maps to destination Y, and here is a condition, right? And here are some notes. And when anybody working with our teams wants to update this, you edit the YAML and you open a pull request, and we can see exactly what has changed, and when, and by whom. And we can export that information up into readable documentation for end users. So we have a process that takes the current state of this repository of all of our metadata, crunches it, and outputs something readable that can go into an FAQ document that our business users can use. And this lives alongside our API definitions, which are generated in a similar way. So the idea is: keep the real information concrete and versionable. That's the really, really essential thing here. You don't want to have a bunch of documents that can drift. You want to have one clear thing that said one value on this day and another value on that day, and we found GitHub to be the perfect way to orchestrate that.
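A minimal sketch of the export step Markus describes, in Python. The mapping entries below are shaped the way `yaml.safe_load` would return them from such a repo, but the field names, conditions, and notes are invented for illustration.

```python
# Each dict stands in for one YAML mapping record from the (hypothetical) repo.
mappings = [
    {"source": "LEGACY.POLICY.POL_EFF_DT", "target": "policy.effective_date",
     "condition": "POL_STATUS != 'V'", "notes": "Voided policies are excluded."},
    {"source": "LEGACY.MEMBER.MBR_ID", "target": "member.member_id",
     "condition": None, "notes": None},
]

def render_docs(mappings: list) -> str:
    """Crunch the versioned mapping metadata into readable end-user docs."""
    lines = ["# Field mappings", ""]
    for m in mappings:
        lines.append(f"- `{m['source']}` maps to `{m['target']}`")
        if m["condition"]:
            lines.append(f"  - condition: `{m['condition']}`")
        if m["notes"]:
            lines.append(f"  - note: {m['notes']}")
    return "\n".join(lines)

print(render_docs(mappings))
```

Because the source of truth is the YAML in version control, regenerating this document is cheap, and the Git history answers what changed, when, and by whom.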

Matthew Stibbe (Articulate)   22:22 That's fascinating, because a spreadsheet imposes a sort of structure, a grid, in two dimensions.

Markus Kolic   22:29 Hmm.

Matthew Stibbe (Articulate)   22:31 And, this is the danger of doing live interviews with builders and spouses outside, so if anyone is hearing noises, apologies for that. GitHub, any kind of version control, has another dimension: a dimension of increments of change over time, and that's sort of missing from Excel.

Markus Kolic   22:52 Hmm.

Matthew Stibbe (Articulate)   22:55 It's one of the reasons why, I mean, don't get me wrong, I live in Excel too, but it's one of the reasons why it has deficits for doing some of these things. And YAML, if my memory serves me, has some programming-language sort of structure to it, so there's some rigour about how you've defined things. Is that right?

Markus Kolic   23:13 Yes, yes, exactly. You can have nested fields under one another; think of it as expanding trees of information according to defined patterns and tags. So all of that can be pretty easily read by a person. This is not like a big XML file with a bunch of repeated labels and things that are hard to scan. But it can also be parsed really, really simply by a script.

Markus Kolic   23:39 That's the beauty of YAML specifically.

Matthew Stibbe (Articulate)   23:42 So, looping back to our conversation about AI, this mad idea occurs to me: if you unleashed a machine learning model, or sort of a large language model, on your YAML files...

Matthew Stibbe (Articulate)   23:57 ...and it could read that, could that be a way of, you know, interrogating that data in an interesting way?

Markus Kolic   24:06 Yes, absolutely it could. One of the most interesting uses and potentials of generative AI is to take a document and make it alive, to be able to speak to it and ask it questions. Which you certainly could do: you could feed that mapping in and get all manner of explanations and suggestions out of it. You could use AI to extend it and say, all right, given this pattern (here's, for instance, our brokers), tell me what you think a brokerage firm might look like. Make an educated guess. You know, there's a lot of possibility for research and experimentation there.

Matthew Stibbe (Articulate)   24:42 You could turn the telescope the other way around and say: given what we have learned and documented, in this rigorously structured way, about acquiring all this data, and with all these patterns, here's some new data that we think is like this. Can you do the historical, archaeological analysis on this and give us some ideas about what might be going on here?

Markus Kolic   24:58 Hmm. Right, you absolutely could. And in order to get that, the practical thing that you need is a centralized place to have that information. One of the teams here at Sun Life has done this magnificent thing: they use a tool called Collibra, which is essentially a big library, and they've collected every set of DDL, you know, database definitions, that we have at the organization. It's a massive, massive, massive set; we're talking thousands of databases, hundreds of thousands of fields and tables. It's searchable and indexed, and not all of it is tagged yet; that process of learning what all of this information is, is ongoing. Again, archaeology. But the fundamentals are there. I can go in and type a business term and see: OK, here are the six databases we have in which this term appears. And AI, using that information, could connect that to the human-created mappings that we have pretty efficiently, and connect the patterns. We've only just begun to think about what's possible there.

Matthew Stibbe (Articulate)   26:07 And where does Clover fit into this maelstrom of data and databases and schemas that you have?

Markus Kolic   26:13 Yeah. Well, Clover has been completely integral to the way we navigate it, and I think, for me, you know, I'm quite proud of it. I've been working with Clover for, it'll be 10 years next year. Back before we were acquired, at Maxwell Health, Clover came in initially as, sort of, well, they had me, I was trained as an ETL developer, so maybe we could bring in a cheap ETL tool and see if it did something useful. Which it turned out that it did. We wrote a bunch of our existing integrations in it. And I have built much of my career on Clover specifically because, unlike a lot of the ETL tools that were available at the time (notice a pattern here), everything that Clover did was versionable. Everything that you build in Clover is saved as an XML file under the hood, and we can track it in GitHub.

Matthew Stibbe (Articulate)   27:00 Yeah.

Markus Kolic   27:05 It made iteration incredibly easy. Our team, using Clover as its platform for all of our data integration, can get a request for, you know, some new piece of data, some new piece of information, and build it almost immediately in Clover: just drag a couple of components onto a screen, define it, open a pull request, get it reviewed, merge it, test it, deploy it, in the space of maybe a couple of hours, which in a corporate context is astounding. You don't usually

Matthew Stibbe (Articulate)   27:33 Yeah.

Markus Kolic   27:33 Get this kind of agility. But Clover, you know, a proper scalable, enterprise-grade ETL tool, fit into the really, really flexible deployment patterns of the little startup that we had. And since being acquired, since growing into the full scope of what Sun Life does, we've been able to scale it up while maintaining that level of agility. That has been an absolute game changer for everything we have worked on: the fact that we can just throw something in quickly and iterate and pivot as we learn and see what it's doing, without, as I was saying earlier, those long cycles of change management. Clover has been a straight line into the future for us.

Matthew Stibbe (Articulate)   28:19 Oh well, they'll love to hear that. Thank you.

Markus Kolic   28:23 I'm not here to market, I swear.

Matthew Stibbe (Articulate)   28:24 No, indirectly, I suppose I am a bit. But it's nice when the software does what it says on the tin. And it's interesting, this sort of ties together quite a lot of the things that we started out talking about, which is being able to be agile and move quickly. I think there's something about GitHub, or version control, which also speaks to accountability. You know, there's the name of a person who checks in and checks out the code and runs forks and things like that, and it brings a little bit of rigour to it. So, we're almost out of time; I could keep talking for a long time, but as I record this, we're closing on the last working day of the year for me. So it's between you and Christmas now.

Markus Kolic   29:13 Hmm.

Matthew Stibbe (Articulate)   29:15 But as we bring this to a close, what one piece of advice would you give anybody who's listening who is tackling some big Strangler-type problem? What have you learned that would be useful?

Markus Kolic   29:32 Well, the most interesting thing that I have learned lately, and I'm glad we're mentioning this as we come right up towards Christmas, is this: just last night I was out with my fiancée at the Commonwealth Shakespeare Company's production of A Christmas Carol here in Boston.

Markus Kolic   29:47 They have done the most beautiful staging of it. It's really extraordinary; anybody in New England should come and see it when they get a chance. And Scrooge learns the lesson over the course of A Christmas Carol that what really matters in his life is not the inputs and outputs of his counting house. It is the people that come in and out of it, the people that he should be caring about. That's what his money is for. Now, what I link this to is something you mentioned earlier in this interview, which is that we are talking about a human experience here. It's easy for us to say, well, it's data: data in, data out, our systems do this, our systems do that. But all of these things are people from the beginning. You know, Mitt Romney, of all people, put this very well. "Corporations are people, my friend" is a quote from 2012 that he was justly ridiculed for, but he wasn't wrong: all of the systems that we build, all of the software that we have, is here to serve the people.

Markus Kolic   30:41 It was made by people, for people. Software is a tool. And for me, I'm a newly minted manager at Sun Life. I've been in this role for maybe three months now, and what I'm learning is that everything that happens at a corporation, even your vastly complicated data integration projects, is about the people, about actual relationships between humans. The data only matters insofar as it is comprehensible and useful to the people. That's when it becomes information. Data is not information until it can be used and understood by a human. So don't lose sight of the fact that our job as data engineers is to turn that data into information. And in order to do that, from the beginning, you need to understand the people that it is from, the people that are handling it, and the people that it is for. You need human connections. You need to be in a room with those people, making eye contact and understanding them intuitively, both politically in a corporation and systemically in your data. Uniting the human needs with your way of thinking about it is the only way you will succeed.

Matthew Stibbe (Articulate)   31:50 I love that humanistic take on it. I think that's wonderful. So, as we head into Christmas, let's remember that the data works for us; we don't work for the data. And I think, on that note, that brings this episode to a close. Thank you very much, Markus. It's been a delight talking to you. And everyone listening at home, if you would like to get more practical insights about data and data integration, or learn more about CloverDX, please visit cloverdx.com. Thank you very much for listening to Behind the Data. Thank you, Markus, and goodbye.

Markus Kolic   32:23 Thank you. Merry Christmas. Happy holidays.

Matthew Stibbe (Articulate)   32:25 Merry Christmas, you too.
