Down time

College is designed to teach you a bunch of stuff, but one of the biggest lessons comes from “finals week”. This part of the semester also includes the week before finals, when nearly every goshdarn class seems to have a big project due (as if it were the only class you are taking). The double-whammy of big projects plus cramming for final exams is the pinnacle of the semester. These two weeks are grueling and really test your mettle. You work your butt off and spend seemingly endless amounts of time studying and obsessing over your classes, all so you can earn the best grade possible.

The good news is that this process teaches you a very valuable skill. In your real job (and your career overall), you will periodically experience intense stretches, just like finals week. Hopefully, they don’t happen more than two or three times per year.

In college, after finals, there is always a week of downtime. Students are supposed to use this time to catch up on everything that was neglected during the previous two weeks, and it is a chance to recharge. It is also a period for reflecting on how the previous semester went and for contemplating what could be done next semester to improve things. I suppose there is even some time to celebrate your accomplishments and share the experience with your classmates.

In your career (after college), you don’t usually get downtime like you did in college. It is the job of management to make sure that your workload does not consist of peaks and valleys like you had in college. Those are not good for people, and they can hurt morale. However, if you are a programmer, this is naturally going to happen anyway, because it is part of the lifecycle of some software development processes (methodologies).

If you ever get some downtime, you DO NOT want to treat it the same way that you did in college.

Things not to do: (aka “Bad”)

Sleeping in, partying, showing up late, leaving early – Don’t even think about it. When there is downtime at work, management will be watching you more closely than ever. There is a saying: “idle hands are the devil’s workshop”. This is a test to see who is responsible and who is unable to manage themselves and behave consistently like an adult. You will not be warned about this. If you fail this test, you will be blindsided weeks or months later. For some strange reason, all of the good work that you have done won’t seem to matter. You will be passed over without any explanation. If this has ever happened to you, then please commit this warning to memory: don’t ever think that nobody is looking. You are always being evaluated. Always. These are the moments where you will stand out, in a good or bad way.

Things to do: (aka “Good”)

Tech debt resolution – “You got time to lean, you got time to clean.” We all know that every dev project contains some technical debt. Now is the time to assess it and catalog it. Make a wish list and prioritize it. Time is going to fly by, so don’t put it off, and don’t be informal about it. Get management involved so they understand, or they might mistakenly think that this is some kind of boondoggle, or that it’s just “for the cool points” or something. Take it seriously, and help them take it seriously too.
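To make “don’t be informal about it” a little more concrete, here is a minimal sketch in Python (the item names and fields are made up, not from any particular tool) of a wish list that carries impact and effort, so the prioritization is something you can show management instead of keeping it in your head:

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    title: str
    area: str         # e.g. "data access", "build", "logging"
    impact: int       # 1 (annoyance) .. 5 (actively causing defects)
    effort_days: int  # rough estimate, in days

# Hypothetical entries -- replace with what your own assessment actually finds.
backlog = [
    DebtItem("Duplicate validation logic in 3 controllers", "web", impact=4, effort_days=3),
    DebtItem("No retry/backoff on the payment service client", "integration", impact=5, effort_days=2),
    DebtItem("Hand-rolled CSV parser instead of a library", "import", impact=2, effort_days=1),
]

# Highest impact per day of effort first -- an easy story to tell management.
for item in sorted(backlog, key=lambda i: i.impact / i.effort_days, reverse=True):
    print(f"impact {item.impact} / {item.effort_days}d  {item.area:12} {item.title}")
```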

Brainstorming – Now that the project is being used by real people, folks are going to become more aware of what works and what is just awkward or downright frustrating. Start seeking feedback and start developing a list of suggested improvements.

Log monitoring – You are not a hack, so your system has some tracing and logging mechanisms. If anything goes wrong, you need to keep your nose in those logs so you know about it before anyone else does. Watch PerfMon like a hawk. Find the next bottleneck or two and be ready to fix any flaws promptly.
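If “keep your nose in those logs” sounds vague, here is a minimal sketch of the habit in Python. The log path and the keywords are assumptions; adapt them to whatever your tracing actually writes:

```python
import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("logs/app.log")   # assumed location of your application log
PATTERN = re.compile(r"\b(ERROR|WARN|TIMEOUT|EXCEPTION)\b", re.IGNORECASE)

counts = Counter()
for line in LOG_PATH.read_text(errors="ignore").splitlines():
    match = PATTERN.search(line)
    if match:
        counts[match.group(1).upper()] += 1

# A quick daily summary -- if these numbers jump, go digging before anyone calls you.
for level, count in counts.most_common():
    print(f"{level:10} {count}")
```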

Process improvement – In every project there will be some good times and some bad. Now is the time to assess that, while it is all fresh in everyone’s minds. Write it down and contemplate it. Just like the lull between semesters, you can use this time to figure out how to begin and execute the next project or phase better than the previous one. Find root causes, propose process changes, and research new technologies and toolsets that could improve your next project (or phase/sprint).

Utility / internal / support apps – Once you start supporting your app, you start realizing that it would be easier if your app had better logging, reporting, more granular tracing, better info, etc. It can be pretty satisfying to write an app for yourself (in a day or two) that makes your job (or your whole team’s job) simpler.

Learning / self-improvement – Be careful about this one. Most managers will expect that you are already doing this at home, “on your own time”. So if you are doing your homework at work, you are one step short of doing nothing, or slacking, etc. It might seem like it is better than nothing, but there are probably things that you could be doing to bring the “wow” instead of just not sleeping at your desk.

There are more things, but I think you get the picture. This is what leet looks like.


Testers, bugs and the end-of-the-world

*** Disclaimer: this is not about my current project. It is about all projects, everywhere. ***

If you are a developer and you have worked much with professional software testers, you will love and hate them.

You will love them because they can do something that you struggle with: they find the bugs that you missed. Everybody wants flawless software, and as a developer, you probably have your limitations. You are good at developing, but you think like a sane, reasonable person… and well, that is just not good enough when it comes to testing.

A skilled tester has the extraordinary ability to think of wacky, cuckoo, way-out-of-the-box, “WTH” kinds of stuff. It is unnatural for a developer to think like that. Therefore, when a tester finds some nutso bug, or even something super obvious, the developers feel grateful that it was caught before it went to production.

…unless the tester acts like a jerk about it.

I guess even that is forgivable. Probably.

Unfortunately, the very nature of professional testing is eventually going to condition testers to be jerks. That sounds a little harsh, I know. Let me explain.

So, when a tester finds a bug, he or she has done their job. “Nice one! Keep it up!” Every now and then, a tester will find a “show-stopper”. This is a kind of bug that would wreak havoc on your system and/or undermine a user’s confidence in your software. Sometimes it can be something simple, like an error message or an awkward validation message, and sometimes it can be something subtle, like a security rule that was implemented incorrectly. Regardless, when a tester finds a “show-stopper”, the response is usually celebration by the test team and frustration by the dev team and management.

As a tester, your first project will have a few of these discoveries, and they will all seem very exciting and emotional. Over time, a tester will accumulate trophies: tales about the worst bugs ever found and just how serious they were. “That was a serious flaw! What a catch! …and it was truly a last-minute discovery. …blew our go-live date.” For a tester, these stories are big accomplishments, and telling them feels great, because you saved the day.

Not every project will go that way. Developers are clever folks. They also have their own tales about solving these “last-minute show-stoppers” and how stressful it was. They will learn the patterns of the bugs and take measures to ensure that they never happen again. They will learn from their mistakes and mature in their processes. They will eventually make fewer mistakes and produce better programs. This is great for the developers, but for the testers, not so much.

Conditioning

If you are a tester who has found dangerous/critical/catastrophic/project-killer bugs, those shiny trophies become part of your identity. They make you feel validated. You sacked the opponent’s quarterback, or blocked the field-goal kick. You saved the big game and you feel great about it. On your next project, you expect the same, or better. You want eggs for breakfast and those developers are like chickens in a coop. Grab a basket.

Managers mean well. It is their job to keep you motivated and hungry. They know that winning a trophy feels good. A word or two of encouragement can’t hurt anything. Could it? “Remember when you found that epic bug? When are you going to find another one like that? You can do it!”

Let’s face it, a tester can only find bugs that are actually there. If there are no serious bugs, then no serious bugs will be found. As a tester, you understand this. So, when there are no serious bug discoveries, everyone has a reason to celebrate, except for you.

If this continues, you (and possibly your management) will start to doubt or second-guess your skills. Could there be a big bug or two in there that you didn’t find? You might even get superstitious and develop a feeling that there must be a show-stopper in there. Some people even resort to invalidly applying statistics, like “for every [X] lines of code there are [Y] flaws and [Z] serious flaws. Therefore, we must have missed something.” It is totally invalid, but it is an easy mistake to fall into.

Eventually it will occur to you: you couldn’t have missed [Y] flaws and [Z] serious flaws. So what if you actually did find them, but just didn’t recognize their significance? After all, within the current context, certainly some of the bugs are relatively serious. You could be collecting some new trophies right now. Maybe if you had made an appropriate spectacle, folks would be congratulating you. So why not give it a try?

So you try it. You make a bigger deal about one of the bugs that you find.

It is just what the doctor ordered! People have been waiting for this: a bug to sink their teeth into. Everyone high-fives you and calls you a hero. Yes! And to think, you almost didn’t make a big deal about it. Golly. What were you thinking?!

Results

When a few weeks go by and the testers haven’t found a big one, they will feel some pressure. They will feel deprived of their joy, and it might motivate them to try something to change that condition. After all, folks need their high-fives, right? You see where I’m going with this. …I’m not saying that the testers will resort to blatant hyperbole. (Not initially, at least.) However, I’m thinking of a gentler term like “exaggerate”, or maybe “embellish” or “over-react”. Yes sir! All it takes is a little incentive, and the proper rewards.

Repeat this pattern a few times and guess what? It becomes a habit. In some ways it can even seem like an addiction. If a little exaggeration doesn’t satisfy, maybe try increasing the exaggeration a little.

You can see where this is going. Folks will catch on that some of this might be a little over-inflated. You can bet that the developers are going to cry foul eventually. They have their own incentives to show progress and improvement. If it seems like someone is intentionally trying to embarrass them, then things might get a little unfriendly.

End-game

Management is eventually going to recognize that something is amiss. At that point they have a few tough options. On one hand, you don’t want to demoralize the testers, because they are thinkers and they need their heads in the game; but if you allow the charade to continue, it will become unhealthy, and it will become more difficult to tell when a bug is just another bug or when it actually is the fourth horseman of the apocalypse.

This is when people-skills are key. Usually, this is a management task, but an experienced dev lead can help de-escalate things too.

We all want what is best for our project. Although the end of the world would be pretty interesting and exciting, I think we are better off delivering the best product possible and leaving the TV-MA stuff to Hollywood.


Tech debt, rewriting the whole thing

If you’ve ever looked at a messy program, you might think “we should just rewrite this. It would take less time than fixing this mess”. The logic seems flawless, right? You are awesome and your ninja moves are “on fleek”. So naturally, if you rewrote the whole thing, it would be monumental perfection covered in bling. It is hard to imagine anything else, right?

If you don’t see the fallacy in the previous paragraph, I implore you to keep reading, because there might be something that you haven’t considered (or you considered it but dismissed it too quickly).

Anatomy of this nightmare?
Often, programmers will try to convince themselves that tech debt is the result of the following:

  • The program was written long ago, before people learned how to make programs that were not impossible messes
  • The program was written by dim-wits who spent their time pounding square pegs into round holes
  • Nobody cared about code quality, and nobody checked up on any of it to ensure any level of quality
  • They used the wrong language / database / server / toolset / framework / methodology. The one that they used is guaranteed to produce this sort of spaghetti
  • The guy in charge was not nearly as smart as me. (followed by a smug/condescending belly laugh)

My response: “in your dreams”

You would be amazed at how easy it is for smart people, with good tools, experience, resources, etc., to make mind-boggling messes. It only takes a few untreated problems to turn a dream project into a nightmare. Do you know them all? Are you sure? Are you vigilantly watching for them?

How will you avoid the same fate?
There is a saying “Those who cannot learn from history are doomed to repeat it.” So before you start your rewrite, you should take the time to learn where the landmines are planted. Here are a few ways to gather intel on that spaghetti:
1. Code review, architectural review, information review (DB) for everything. Make an inventory of the tech debt. All of it.
2. Make a list of the flaws. Group them into categories. (A rough sketch of such an inventory follows this list.)
3. Identify the solutions to those categories WITHOUT resorting to “just start over”.
4. Identify the causes of those problems and the steps that led up to each. E.g., “messy code” – what caused the messy code? “No configs” – how did they plan to promote from dev to prod?
5. Determine how you will detect those problems early and how to remediate them early.
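To make steps 1–4 a little more concrete, here is a minimal sketch in Python. The flaws, categories, and root causes are invented examples, not a real inventory; the point is that grouping the flaws makes the root causes jump out:

```python
from collections import defaultdict

# Hypothetical inventory entries from the code/architecture/DB review.
# Each flaw carries the category it was grouped into and the suspected root cause.
flaws = [
    {"flaw": "Business rules duplicated in UI and stored procs", "category": "messy code",
     "cause": "no agreed-on layering, rushed deadlines"},
    {"flaw": "Connection strings hard-coded per environment", "category": "no configs",
     "cause": "no dev-to-prod promotion plan"},
    {"flaw": "600-line method handling the order workflow", "category": "messy code",
     "cause": "features bolted on without refactoring time"},
]

by_category = defaultdict(list)
for f in flaws:
    by_category[f["category"]].append(f)

for category, items in by_category.items():
    print(f"{category} ({len(items)} flaws)")
    for f in items:
        print(f"  - {f['flaw']}  [root cause: {f['cause']}]")
```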

Have you overlooked anything?
There is a saying about the Darwinism of errors: “if you produce a fool-proof process, the world will eventually produce a better fool”. Now be honest about how much experience you are building upon. Are you usually just really lucky, smart, or resourceful? Is that your plan for succeeding, or have you done this successfully before and have experience with your entire toolset, process, etc.? Is this project just like your previous string of successes, or is anything different?

Are you sure?
So, now that you have assessed the whole thing and understand what went wrong, how it got this bad, and how to avoid or undo it, and you have a much better plan, ask yourself: knowing all of this, are you sure it is easier to start from scratch than to undo the messes? Are you sure that it will yield a better ROI, or are you just hoping for something new and cool to do?
Don’t answer too quickly. Take a day or two to think about all of this, and be honest with yourself. Then ask yourself if you would still rewrite it if you were restricted to the same technology stack and platform. Again, maybe deep inside, you just want something new, and that is your primary motivation.

Something that you forgot
I know that you are awesome and your code is always perfect and shiny. We both know that tech debt is the byproduct of timelines. Timelines always start optimistic and turn ugly as you get close to your end date. That is when your tech debt accumulates. If you had more time, your app would be cleaner. The reason I bring this up is that if you choose to rewrite an app and think you won’t produce just as much tech debt, then you must own a time machine or something. Your timeline WILL get squeezed. It will. And you will make a newer, fresher mess.

I hope I’ve been able to persuade you to reconsider. Maintenance programming doesn’t sound very sehk-say, but you can’t deny the value. You just have to have the determination and experience to stick with it.

For further reading, see Joel Spolsky’s essay “Things You Should Never Do”.


Requirements for Waterfall, Agile and Cowboy

Once again, I am comparing these three approaches to software development. It might seem funny that I could treat “cowboy” like it is a real methodology, but there is some reality to it and I would like to explain how & why it works.

First, let’s compare how each one handles requirements:

Waterfall – You gather all of your requirements before you start developing. Hopefully, you can gather them all correctly, so the project comes out perfect at the end. It only seems to work if you have skilled and experienced requirements takers and givers. Even then, you need some amazing luck, or low standards, or something. Otherwise, at the end, you will discover all of the stuff that you missed or got wrong. For this reason, people often refer to waterfall as “water-fail”. Of course, that nickname is only funny if you haven’t been burned by it, or your scars have healed.

Agile – You gather some requirements and start working with the ones you have. While the developers are working, the business analysts gather more requirements. You get short bursts of work done, at which point you discover which requirements were wrong or incomplete. You add those into the next development cycle. Your cycles need to be rather small (half a month to two months each). After failing and correcting enough times, you are eventually likely to get everything right. YMMV.

Cowboy – You don’t really know how to gather requirements. You just take your best guess and start writing a program. Once you have enough working, you show it to the customer/users. They try to figure out how to use your program and give you feedback about what is not working, or is awkward, or insufferable. You can’t easily distinguish which ones are bugs and which are requirements. It doesn’t really matter, because it is all part of your “to do” list. Once your “to do” list is empty, you are done.

There is one common thread between all three of these. They all gather requirements and do testing. Some are more formal, and optimistic about their ability to gather requirements. Others are (perhaps) more realistic, and acknowledge that the end-users are going to see some bugs.

The big differentiators are 1) how good you are going to be at giving/getting requirements, and 2) who does the most testing and catches the most bugs.

The funny thing is that, in reality, Waterfall usually reverts to cowboy at the end. Cowboy usually starts with a mini-waterfall and Agile seems to rock back-and-forth between mini-waterfall and mini-cowboy. In fact, most agile projects are mostly mini-waterfall or extra-fancy cowboy.

My point is this: plenty of teams are able to be successful (to varying degrees) with each of these. The key is to know your strengths and pick an approach that plays to them. The biggest cause of failure is misjudging which kind of team you are, and struggling to avoid the approach that actually fits you best.


Performance Testing – 201

In my career, I’ve done a few performance studies. (More than just a few.) (I’m trying to be humble, but it’s not working.) (Sorry.)

I’ve had some good mentors along the way, and I’ve done a lot of studying to learn what resources are common bottlenecks, how to detect them, and what can be done about each one. It sounds pretty easy, but sometimes it is harder than it sounds.

A few years back, my team was implementing a new service. It would get a lot of traffic and needed to perform well. I started talking to one colleague about how to test the performance of it. He stopped me after a few seconds and said that he already knew all about this stuff, and that I should step aside so he could bust out a quick performance study and prove that everything was performing great. The way he said it is usually (in my experience) “a tell”. No worries. I stepped back and waited to be impressed.

First he came back with an answer like “Yep, it works great. What’s next?” So I asked him for some statistics to back his claim. He left and came back in a few hours with a graph that looked like this:

[Figure: Graph-bland-zoomed]

I was like, “um, what am I looking at?”
He was like, “Performance graph”
I was like, “No. What does this mean? There are no descriptions.”

So he went away for a few minutes and came back with this:

[Figure: Graph-bland]

I said that I liked the descriptions, but I didn’t find it very convincing. How did that prove that our system was performing well and would scale nicely?

He said “Look. Everything is at the bottom. Nothing is over 40%. We are good-to-go”.

He didn’t quite understand why I wasn’t satisfied yet. So I elaborated: flat lines don’t tell you anything. If none of those lines reach the top, then your test hasn’t confirmed anything about your capacity. You need to exercise the system while you measure it. This will help you identify your bottlenecks. And don’t tell me that there are no bottlenecks, because every system has them. Some are narrow, some are spacious, and some are gigantic. You still need to 1) perform tests that reveal the capacities of the system, and 2) monitor the resources that are most likely to be your bottlenecks and affect performance.

Most relevant metrics:

From my experience, you will usually find what you need when you measure these resources (a rough scripted sampler follows this list):

  • Processor – Total (or each processor ONLY if there are eight or fewer; it is difficult to read a graph of 32+ simultaneous CPUs)
  • Logical Disk – % Disk time – for each disk/controller
  • Logical Disk – Current Disk Queue Length – for each disk
  • Memory – % committed bytes in use
  • Memory – Page faults/sec
  • Network interface – bytes total/sec
  • Network interface – output queue length
  • Objects – Threads
  • Physical Disk – % disk time
  • Process – Handle count
  • Process – Page faults/sec
  • Processor – Interrupts/sec
  • Server work queue – active threads
  • Server work queue – Queue length
  • Server work queue – Total operations/sec
  • System – processor work queue length
  • System – Processes
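PerfMon is the right tool on Windows, but if you want a quick scripted sampler to keep alongside it (or you are on another OS), here is a rough sketch using Python and the psutil package. This is an assumed dependency, the counters are approximate analogues rather than a one-to-one mapping, and unlike the advice in the Method section below, it samples the local machine:

```python
import csv
import time

import psutil  # assumed third-party dependency: pip install psutil

INTERVAL_SECONDS = 5
SAMPLES = 720                  # roughly an hour of data at 5-second intervals
OUTPUT = "perf_samples.csv"    # record to a file so you can analyze it later, like a PerfMon log

psutil.cpu_percent()           # prime the CPU counter; the first reading is otherwise meaningless

with open(OUTPUT, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "cpu_pct", "mem_pct", "disk_read_bytes", "disk_write_bytes",
                     "net_bytes_sent", "net_bytes_recv", "process_count"])
    prev_disk, prev_net = psutil.disk_io_counters(), psutil.net_io_counters()
    for _ in range(SAMPLES):
        time.sleep(INTERVAL_SECONDS)
        disk, net = psutil.disk_io_counters(), psutil.net_io_counters()
        writer.writerow([
            time.strftime("%H:%M:%S"),
            psutil.cpu_percent(),                    # rough analogue of Processor - % total
            psutil.virtual_memory().percent,         # rough analogue of % committed bytes in use
            disk.read_bytes - prev_disk.read_bytes,  # disk activity during the interval
            disk.write_bytes - prev_disk.write_bytes,
            net.bytes_sent - prev_net.bytes_sent,    # network bytes during the interval
            net.bytes_recv - prev_net.bytes_recv,
            len(psutil.pids()),                      # rough analogue of System - Processes
        ])
        f.flush()
        prev_disk, prev_net = disk, net
```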

Method:

  1. Use PerfMon from a different machine (NOT THE MACHINE THAT YOU ARE MONITORING). Of course, this will cause extra network usage, but it is negligible compared to what is happening on the machine that you are testing. While PerfMon is collecting and graphing this data, it will be very busy, which is why you don’t want to run it on the machine that you are monitoring.
  2. Set up PerfMon to record its data to a file. That way, you can take your time to analyze each metric individually, later. Sometimes it is necessary to zoom in on segments of a graph, especially during interesting time periods of a test (peaks, gaps). You won’t be able to do this effectively in real time (during a test).
  3. Try several stress/capacity tests with variances
    1. A human doing normal usage (baseline)
    2. Five humans testing rapidly
    3. One bot working very rapidly
    4. Five bots working very rapidly
    5. Twenty five bots working rapidly
  4. During your capacity tests, measure the following (a minimal load-bot sketch follows this list)
    1. Elapsed time for each action (round-trip, page, response, etc.)
    2. How many actions per minute can be performed for each action
    3. Whether any errors happened during your tests. Were they the byproduct of stressing your system? What was the threshold (capacity) at the time it broke?
    4. If you break the system, what was the error? Can it be correlated to a line of code, database table, or external system?
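Here is a minimal load-bot sketch in Python covering points 1–3 of the measurements above. The URL is a placeholder; a real test should drive the actual actions your users perform, at whatever bot counts you chose:

```python
import concurrent.futures
import time
import urllib.request

TARGET_URL = "http://localhost:8080/api/orders"  # placeholder -- point at the action under test
BOTS = 5
REQUESTS_PER_BOT = 100

def one_bot(bot_id):
    """Hammer the target and record the round-trip time of every request."""
    timings, errors = [], 0
    for _ in range(REQUESTS_PER_BOT):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
        except Exception:
            errors += 1  # failures under load are exactly what you want to know about
        timings.append(time.perf_counter() - start)
    return timings, errors

started = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=BOTS) as pool:
    results = list(pool.map(one_bot, range(BOTS)))
elapsed_minutes = (time.perf_counter() - started) / 60

all_timings = [t for timings, _ in results for t in timings]
total_errors = sum(errors for _, errors in results)
print(f"actions/minute: {len(all_timings) / elapsed_minutes:.1f}")
print(f"avg round-trip: {sum(all_timings) / len(all_timings):.3f}s  "
      f"worst: {max(all_timings):.3f}s  errors: {total_errors}")
```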

Analysis:

  1. Any metric with a name like Queue or Queue Length is best if it is zero. If it is ever above zero, that means the system is waiting on a resource (a bottleneck). If a queue length spikes, that is an indicator that you are above capacity for that resource (disk, network, processor).
  2. Ideally, your CPUs should reach 100% for much of a capacity test. If your CPUs never reach 100% (maybe only 95% or 98% ceiling) that is bad, because something is bottlenecking your system and preventing it from full utilization. You need to find the culprit.
  3. If your CPUs seem to do a little dance, where one is up while the other is down, and they seem to be mirrors of each other, then your processes are single-threaded and you are not running enough simultaneous work, or your system is single-threaded and you have a serious problem.
  4. Memory, objects, threads, should generally be flat or hit a plateau. If they go steadily up, without leveling-off, then you might have a resource leak and you need to find it.
  5. Compare your different test runs to determine how many records could be processed during peak utilization.
  6. If your tests broke your app, figure out which line(s) of code or external resources are responsible. (A rough script for checks 1, 2, and 4 follows this list.)
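As a sketch of checks 1, 2, and 4, assuming you recorded your samples to a CSV (the column names are illustrative; match them to whatever your collector actually wrote):

```python
import csv

# Assumed input: a CSV of interval samples with (at least) cpu_pct and mem_pct columns,
# plus any "queue"-named counters you recorded. Column names are illustrative.
with open("perf_samples.csv") as f:
    rows = list(csv.DictReader(f))

def series(name):
    return [float(r[name]) for r in rows if r.get(name)]

# Check 1: queues should sit at zero; anything above zero means something was waiting.
for column in rows[0]:
    if "queue" in column.lower() and any(v > 0 for v in series(column)):
        print(f"possible bottleneck: {column} rose above zero")

# Check 2: CPUs that never approach 100% during a capacity test suggest another resource is the limit.
cpu = series("cpu_pct")
if cpu and max(cpu) < 95:
    print(f"CPU peaked at {max(cpu):.0f}% -- something else is throttling the test")

# Check 4: memory that climbs steadily without leveling off is a leak suspect.
mem = series("mem_pct")
if len(mem) > 2 and all(b >= a for a, b in zip(mem, mem[1:])) and mem[-1] - mem[0] > 10:
    print("memory grew monotonically across the whole run -- investigate for a leak")
```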

Bottom line: see #2. Your best outcome is having all of your CPUs maxed out. If you have that, you are probably good. Otherwise, you have more work to do.

In my experience, you are most likely to observe a bottleneck because of drive latency or a database that requires better tuning (indexing).  Database tuning is pretty easy.  With the falling cost of SSDs, even drive latency is easy to overcome.

Performance studies can be pretty easy, once you know what you are doing.  Start with the right objective, measure the right resources, find your capacities, and then you may announce that you are good to go, and I might just believe you (maybe).

 


Comparing yourself to others

This is a frequent topic with my kids and even some peers. “I’m not good enough” or “I wish I could be like that person. They have it all”. “I don’t think I could ever achieve what that person has.”

When I was in college, I did an internship during my senior year. The workplace was pretty high-tech and I really admired my boss. I remember agreeing with the other interns, “someday I want to be just like that guy”. I also remember how disappointed I was, one year later, when I was asking my boss for some advice about a program and he confessed that he didn’t know. In one short year, I had already surpassed my boss (on a technical level). I had already achieved my “someday” goal. So now what? Which way was upward and onward? It took me a few days to get over it and pick a new goal.

Your best goal is always going to be improving yourself, perpetually. It will help you grow, and make you into a better you. You never know what you can achieve until you really stretch yourself. It sounds like a stupid goal, “just be better than you were yesterday”. After all, every day you are unlikely to be less than you were yesterday. Duh. Right?

The key is not to measure yourself in millimeters, but in inches or feet. Every day or every week, think about greater things that you could be doing, or learning, or even trying. Make sure that you have chosen something (and are not just coasting). Make a point of working on yourself and check that you are showing measurable progress. Have criteria for your growth. Pick a good stride and stick with it. That should be your goal: to keep up with your targeted pace.

One last thing. Be careful about picking unachievable goals. Although “a journey of a million miles begins with one footstep”, nobody really just goes on a walk for a million miles and survives. Likewise, it isn’t very helpful to compare yourself to someone like Warren Buffett, Bill Gates, or Arnold Schwarzenegger. You must know that you will not be able to top folks like those. Face it, any person can only become “so much” like Michael Jordan or Justin Timberlake. You still need to be realistic about physical barriers (money, anatomy, intellect).

The point of the statement (about journeying a million miles), is to pick a direction, get up and get going. Instead of saying “nobody can travel a million miles”, you should be thinking “a hundred miles is challenging, but possible for me. How would I do it?” and then build a plan and get going.

This is the best way to compare yourself to others. Don’t aim too low, or too high. Pick a good pace and persevere. Be prepared to pick a better goal (if you need to). Ready, go!


Why your web server needs a data mart

The other day, I had a fun discussion with a colleague. We talked about data marts. Just in case you are not familiar with the term, let me take a moment to describe the concept.

A data mart is basically a small database that contains a subset of your company’s data. It is a copy. On the surface, it might sound like a waste of time. After all, maintaining two sets of data can be challenging, and it is an opportunity for mistakes. So why bother?

Background

Most IT systems (programs) store their data in some kind of database. Some systems are (primarily) meant for gathering data and some are meant for displaying data. Big companies tend to have very large databases. Getting useful data out of them can be slow. Inserting data into a big DB can slow down other people who are trying to read data. On a web site, you don’t want “slow”. There are, of course, ways around this.

Managing/Dealing-with Large Data

When you think about data, it is best to think of your data like your money. Keeping it is good. The more (useful data) you gather, the more power it can provide to you. You always need some of it at-hand. Once you gather a bunch, you probably want to start putting some away (like a 401k). Your goal will be to accumulate it steadily. The expectation is that it will be more valuable someday. Having a good plan is important, so you can get a good value out of it someday.

Here is something that you probably don’t want to do: carry all of your money around with you all of the time. Why? Because you don’t want someone to steal it, it is probably big and bulky, and you don’t really need it right now. Just put it in the bank and you can get to it when you need it.

Data is like this too. On your web server, it is pretty rare for someone to need all of the data that your company has been accumulating. You usually only need current data, or a few summaries, and maybe sometimes, you need a big chunk, but not all of the time.

Solution: Data Mart

A data mart is basically like a wallet full of data, or maybe a debit card for data, or something like that. It is a smaller database that only contains the data that your web server is typically going to use in a day, for one system. You don’t have your debit card connected to your 401(k), right? You also don’t need your web server connected to all of your data. Your data is also probably pretty bulky and slow. Maybe bad people would like to do evil stuff with it, so you should protect it and only carry around the data that you actually need. (A small refresh-job sketch follows the summary list below.)

To summarize, a data mart for your web server benefits you by:

  • Isolating traffic – Web server demand is isolated from your main DB, so web traffic doesn’t affect your main DB, and traffic to your main DB doesn’t slow down your web server. It also limits how much a traffic spike or DDoS attack against the web tier can hurt your internal systems.
  • Smaller data = faster – It is much faster to query a smaller amount of data, so responses stay quick under normal, healthy traffic.
  • Less data = less exposure – If your web server ever becomes compromised (hacked), the culprits are likely to get everything that is on the server, and might get everything connected to it. If you plan your systems for this possibility, you will see that this defensive posture (of having less data) minimizes the damage which could occur from a data breach.
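To make the idea concrete, here is a minimal refresh-job sketch in Python using sqlite as a stand-in for both databases. The table names, the 90-day window, and the summary shape are all assumptions; a real setup would pull from your main DB into a separate mart server on a schedule:

```python
import sqlite3

MAIN_DB = "company_main.db"   # stand-in for the big internal database (assumed to have an orders table)
MART_DB = "web_mart.db"       # the small copy the web server actually queries

main = sqlite3.connect(MAIN_DB)
mart = sqlite3.connect(MART_DB)

# The mart only carries what the site needs: recent orders and one pre-built summary.
mart.executescript("""
    DROP TABLE IF EXISTS recent_orders;
    CREATE TABLE recent_orders (id INTEGER, customer_id INTEGER, total REAL, placed_on TEXT);
    DROP TABLE IF EXISTS daily_sales;
    CREATE TABLE daily_sales (day TEXT, order_count INTEGER, revenue REAL);
""")

# Copy only the last 90 days of orders -- not the whole history.
recent = main.execute(
    "SELECT id, customer_id, total, placed_on FROM orders "
    "WHERE placed_on >= date('now', '-90 days')"
).fetchall()
mart.executemany("INSERT INTO recent_orders VALUES (?, ?, ?, ?)", recent)

# Pre-aggregate the heavy query so the web server never runs it against the main DB.
summary = main.execute(
    "SELECT date(placed_on), COUNT(*), SUM(total) FROM orders GROUP BY date(placed_on)"
).fetchall()
mart.executemany("INSERT INTO daily_sales VALUES (?, ?, ?)", summary)

mart.commit()
main.close()
mart.close()
```

Run something like this on a schedule (nightly, hourly, whatever your data can tolerate), and the web server only ever talks to the small, fast copy.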

Bottom line: keep different (isolated) systems for your internal users and external users. It takes a little more thinking, planning, and equipment, but it is much better than walking around with a big sack of money.
