The Monday morning of Week 5 started with an argument.
Maya walked into the office at 7:30 AM to find Jake sitting at his desk, arms crossed, staring at his monitor with an expression that could curdle milk.
“Morning,” Maya said cautiously.
“We need to talk,” Jake said.
Maya set down her coffee. “Okay. What’s up?”
“I got your email last night,” Jake said. “The one about Week 5 plans. Database modernization using Oracle Cloud Exadata.”
“Right,” Maya said. “That’s the plan for this week—”
“You want to move our databases to a managed service,” Jake interrupted. “You want to give Oracle control of our databases.”
“I want us to use Oracle Exadata on Google Cloud,” Maya corrected. “Which is Oracle’s engineered system running on GCP infrastructure. It’s still Oracle Database, still fully featured, still something you have administrative access to.”
“But Oracle manages the infrastructure,” Jake said. “The storage, the patching, the high availability configuration. We lose control.”
“We lose operational burden,” Maya countered. “There’s a difference.”
“Maya, I’ve been a DBA for twenty-six years,” Jake said, his voice tight. “I know how to manage Oracle databases. I know how to tune, optimize, and keep them running. You’re asking me to hand that over to a service that I can’t see inside of, can’t customize the way I want, can’t—”
“Can’t spend forty hours a quarter patching?” Maya suggested gently. “Can’t spend weekends troubleshooting storage performance? Can’t wake up at 3 AM when a backup script fails?”
Jake was quiet.
Maya sat down next to him. “Walk me through what you’re really worried about.”
Jake took a breath. “I’m worried that if we move to managed Exadata, my job becomes irrelevant. If Oracle handles the infrastructure, the backups, the patching, the high availability—what’s left for me to do? I become a glorified report writer. And then in a year, someone asks why we’re paying a DBA salary for work that doesn’t require a DBA.”
“Is that what you think I’m trying to do?” Maya asked. “Eliminate your position?”
“I don’t know,” Jake said honestly. “Two months ago, we were running PeopleSoft the traditional way. Now we’re doing infrastructure-as-code, observability platforms, and cloud migrations. Every week, something I used to do manually gets automated. At some point, you automate me out of a job.”
Maya was quiet for a moment, choosing her words carefully.
“Jake, do you remember last week? When we built the observability dashboards?”
“Yeah.”
“Who configured Oracle to export those metrics?” Maya asked. “Who knew which wait events matter for PeopleSoft performance? Who explained why we should monitor the DB file sequential read versus the DB file scattered read?”
“I did,” Jake said.
“Right. Because you understand Oracle performance at a level none of us do. Now let me ask you something else: how much time do you spend per month on tasks that require that deep expertise versus tasks that are just… keeping the lights on?”
Jake thought about it. “Maybe twenty percent on actual performance tuning and optimization. The rest is backups, patches, monitoring disk space, restarting failed jobs, dealing with tablespace extensions—”
“Operational toil,” Maya finished. “Important work, but not work that requires twenty-six years of expertise. What if we could flip that ratio? What if eighty percent of your time was spent on performance optimization, data architecture, query tuning, and helping developers write better SQL? What if the toil got automated?”
“That’s what managed Exadata does?” Jake asked skeptically.
“That’s what it enables,” Maya said. “Let me show you something.”
She pulled up a document she’d been working on over the weekend: a detailed comparison of their current Oracle environment versus Oracle Exadata on GCP.
Current State: Oracle Database on Self-Managed RAC
Jake’s Time Allocation (Monthly):
Backup management and monitoring: 12 hours
Patch management and testing: 16 hours (quarterly spike: 40 hours)
Storage management: 8 hours
High availability configuration and testing: 6 hours
Performance monitoring: 10 hours
Performance tuning and optimization: 8 hours
Incident response and troubleshooting: 12 hours
Capacity planning: 4 hours
Documentation and runbooks: 4 hours
Total: 80 hours/month on average
Breakdown:
80% operational toil (backups, patches, storage, HA, monitoring, incidents)
20% strategic work (tuning, capacity planning, documentation)
Maya scrolled down to the next section.
Future State: Oracle Exadata Database Service on Google Cloud
Jake’s Time Allocation (Monthly):
Backup management: 0 hours (automated by Oracle)
Patch management: 2 hours (reviewing and approving automated patches)
Storage management: 0 hours (automated scaling)
High availability configuration: 0 hours (built-in, automated)
Performance monitoring: 4 hours (dashboard review, alert investigation)
Performance tuning and optimization: 30 hours
Query optimization and developer support: 20 hours
Data architecture and design: 15 hours
Capacity planning: 3 hours (simplified with automated scaling)
Strategic database projects: 6 hours
Total: 80 hours/month
Breakdown:
8% operational toil (patch approval, monitoring)
92% strategic work (optimization, tuning, architecture, developer support, capacity planning)
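An aside for readers checking the arithmetic in Maya's two tables: both allocations sum to 80 hours, and if "toil" is taken to be the backup, patching, storage, HA, monitoring, and incident lines, the operational hours drop by roughly 90 percent, the figure Jake's recommendation later cites. A quick sketch (the category names are shorthand for the table rows above):

```python
# Monthly hours from the current-state table (self-managed RAC)
current = {
    "backups": 12, "patching": 16, "storage": 8, "ha": 6,
    "monitoring": 10, "tuning": 8, "incidents": 12,
    "capacity": 4, "docs": 4,
}

# Monthly hours from the future-state table (managed Exadata)
future = {
    "backups": 0, "patching": 2, "storage": 0, "ha": 0,
    "monitoring": 4, "tuning": 30, "dev_support": 20,
    "architecture": 15, "capacity": 3, "strategic": 6,
}

# Both states keep Jake at 80 hours/month; only the mix changes
assert sum(current.values()) == 80
assert sum(future.values()) == 80

# Toil here = backups, patching, storage, HA, monitoring, incidents
toil = {"backups", "patching", "storage", "ha", "monitoring", "incidents"}
current_toil = sum(v for k, v in current.items() if k in toil)  # 64 hours
future_toil = sum(v for k, v in future.items() if k in toil)    # 6 hours

reduction = 1 - future_toil / current_toil
print(f"toil: {current_toil}h -> {future_toil}h ({reduction:.0%} reduction)")
```

The exact percentage depends on which rows you count as toil, but under any reasonable grouping the operational hours shrink by an order of magnitude.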
Jake studied the comparison. “You’re saying I’d spend the same amount of time, just on different things.”
“Not just different things,” Maya said. “More valuable things. Things that actually leverage your expertise. Right now, you’re spending twelve hours a month babysitting backups. Oracle’s Exadata service handles that automatically with built-in snapshot capabilities and point-in-time recovery. Those twelve hours could be spent optimizing the SQL queries that actually drag down database performance.”
“But what about when something goes wrong with the backups?” Jake asked.
“Then Oracle’s support team troubleshoots it,” Maya said. “Because it’s their infrastructure, their responsibility. You’re not on call at 3 AM because a backup script failed. They are.”
“And if I need to tune something at the storage layer?” Jake pressed.
“You can’t,” Maya admitted. “Because Oracle’s engineers optimize Exadata’s storage layer. But here’s the thing—when was the last time you actually needed to tune storage parameters for performance?”
Jake thought about it. “Honestly? Not in years. We set it up correctly initially, and it’s been stable since then. Most performance issues are bad SQL or missing indexes, not storage configuration.”
“Exactly,” Maya said. “So we’re protecting your ability to tune something you rarely need to tune, at the cost of you spending twelve hours a month managing backups you shouldn’t have to think about.”
She pulled up another document. “Let me show you the cost analysis. Because this isn’t just about your time—it’s about total cost of ownership.”
Current State: Self-Managed Oracle RAC on Premise
Infrastructure Costs:
Database servers (2 nodes): $180K capital (5-year depreciation: $36K/year)
Storage (SAN): $240K capital (5-year depreciation: $48K/year)
Network equipment: $40K capital (5-year depreciation: $8K/year)
Data center space, power, cooling: $24K/year
Hardware maintenance contracts: $32K/year
Software Costs:
Oracle Database Enterprise Edition licenses: $94K (already owned)
Oracle RAC licenses: $47K (already owned)
Oracle support contracts (22% annually): $31K/year
Personnel Costs:
Jake’s time on database infrastructure: 80 hours/month × $85/hour = $81,600/year
Infrastructure team support: $24K/year
Total Annual Cost: $284,600/year
Maya scrolled to the comparison.
Future State: Oracle Exadata Database Service on Google Cloud
Infrastructure Costs:
Exadata X9M Quarter Rack: $168,000/year (consumption-based pricing)
Includes: compute, storage, networking, all infrastructure
Includes: automated backups, patching, and high availability
Includes: Oracle’s 24/7 infrastructure support
Software Costs:
Oracle Database Enterprise Edition: included in Exadata service
Oracle RAC: included in Exadata service
Oracle support: included in Exadata service
Personnel Costs:
Jake’s time on database infrastructure: 6 hours/month × $85/hour = $6,120/year
Jake’s time on strategic work: 74 hours/month (no additional cost, reallocated time)
Infrastructure team support: $0 (no longer needed)
Total Annual Cost: $174,120/year
Annual Savings: $110,480
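The dollar totals in the two cost tables reduce to straightforward arithmetic. A minimal sketch, using only figures quoted above (the $85/hour rate applied across 12 months reproduces the personnel lines):

```python
HOURLY_RATE = 85  # Jake's loaded rate, from the personnel lines

# Current state: self-managed Oracle RAC (annual figures, USD)
current = {
    "server_depreciation": 36_000,   # $180K over 5 years
    "storage_depreciation": 48_000,  # $240K over 5 years
    "network_depreciation": 8_000,   # $40K over 5 years
    "datacenter": 24_000,            # space, power, cooling
    "hw_maintenance": 32_000,
    "oracle_support": 31_000,        # 22% of owned licenses
    "dba_infra_time": 80 * 12 * HOURLY_RATE,  # $81,600
    "infra_team_support": 24_000,
}

# Future state: Exadata Database Service on Google Cloud
future = {
    "exadata_subscription": 168_000,  # quarter rack; licenses and support included
    "dba_infra_time": 6 * 12 * HOURLY_RATE,  # $6,120
}

current_total = sum(current.values())
future_total = sum(future.values())
print(current_total, future_total, current_total - future_total)
# -> 284600 174120 110480
```

Note that the strategic 74 hours/month appear in neither total: that time is reallocated, not eliminated, which is exactly Maya's point.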
Jake stared at the numbers. “We’d save $110,000 per year by moving to Exadata?”
“And that’s conservative,” Maya said. “I didn’t account for the cost of our time during the quarterly patch cycles. I didn’t include the cost of storage upgrades every three years. I didn’t factor in the opportunity cost of incident response time. The real savings are probably closer to $140,000 annually.”
“But we already own the hardware,” Jake protested. “That’s sunk cost.”
“True,” Maya said. “But the hardware is six years old. In two years, we’ll need to replace it. That’s another $460,000 capital expense that we can avoid entirely by moving to Exadata as a service. Plus, we can decommission the data center space and stop paying for power and cooling.”
“What about the Exadata cost?” Jake asked. “That’s $168,000 per year. That’s not cheap.”
“It’s not,” Maya agreed. “But it’s less than what we’re spending now, and it includes things we currently pay for separately—software licenses, support contracts, infrastructure maintenance. Plus, it scales. Right now, if we need more database capacity, we have to buy another SAN array and another server. Six-month lead time, huge capital expense. With Exadata on GCP, we can scale up in hours by adjusting our consumption tier.”
Jake was carefully reading the cost breakdown. “You said Jake’s time on database infrastructure drops to six hours per month. What are those six hours?”
“Reviewing automated patch schedules and approving them, checking capacity trends, and investigating any performance alerts that come through observability,” Maya said. “The stuff that actually requires your judgment. Everything else—the backups, the storage management, the HA configuration, the routine monitoring—that’s handled by Oracle’s automation.”
“And the other seventy-four hours I’m supposedly spending on strategic work,” Jake said. “What does that actually look like?”
Maya pulled up a new document. “I’m glad you asked. I’ve been thinking about this a lot. Here’s what a modern DBA role looks like when you’re not buried in operational toil.”
The Strategic DBA: Redefining Value
Performance Optimization (30 hours/month):
Proactive SQL tuning based on observed query patterns
Identifying and fixing performance anti-patterns
Working with application developers on database-efficient code
Analyzing and optimizing batch job performance
Eliminating performance bottlenecks before users notice them
Data Architecture (15 hours/month):
Designing efficient database schemas for new functionality
Planning and executing data model improvements
Evaluating and implementing new Oracle features
Capacity planning based on business growth projections
Data archiving and retention strategy
Developer Enablement (20 hours/month):
Reviewing SQL queries in code reviews
Teaching developers database best practices
Creating reusable query patterns and templates
Building tools that help developers write better SQL
Pair programming on complex database interactions
Platform Evolution (9 hours/month):
Evaluating new database technologies and features
Planning database modernization initiatives
Contributing to platform architecture decisions
Researching industry best practices
Strategic projects (machine learning, analytics, etc.)
Jake read through the list, his expression slowly changing from defensive to thoughtful.
“This is what you think I should be doing instead of managing backups,” he said.
“This is what you’re uniquely qualified to do,” Maya said. “Nobody else on the team can do this work at the level you can. Priya’s a good developer, but she doesn’t understand Oracle internals as well as you do. Tom knows infrastructure, but he can’t tune SQL. Sarah’s brilliant at automation, but she doesn’t have your depth of database expertise.”
“But they can all click a button to run a backup,” Maya continued. “And frankly, Oracle’s automated backup system can do it better than any of us because it’s designed specifically for that purpose. So why are we paying you $85 an hour to babysit backups when you could be preventing the kind of performance issues that slow down the entire university?”
Jake was quiet, processing.
“Let me ask you something,” Maya said. “In the last month, how many times have you wanted to work on a database performance issue but couldn’t because you were dealing with operational overhead?”
Jake exhaled. “Last week. We had a batch job running slowly—probably a missing index or a bad query plan. But I was in the middle of patch testing, and by the time I got to it, the job had finished. It’ll probably be slow again next month, but I haven’t had time to investigate properly.”
“That’s exactly what I’m talking about,” Maya said. “You have expertise that could have prevented that slow batch job. But you were spending your time on patch testing that could be automated. That’s a misallocation of your talent.”
“Okay,” Jake said slowly. “I see the argument. But I still have concerns. What about control? Right now, if I need to change an Oracle parameter, I can. With Exadata as a service, do I lose that?”
“Some parameters you can still change,” Maya said. “Database-level configuration, optimizer settings, memory allocation within your allocated resources—all of that’s still under your control. What you can’t change is infrastructure-level stuff like storage configuration or networking. But again, when was the last time you needed to change those?”
“Fair point,” Jake admitted.
“And here’s the thing,” Maya continued. “The infrastructure parameters Oracle sets for Exadata are based on thousands of customer deployments and years of engineering. They’re probably better than what we configured six years ago when we set up our RAC cluster.”
Tom had wandered over during the conversation and was listening from the doorway.
“Can I jump in here?” Tom asked.
“Please,” Maya said.
“Jake, I had the same reaction you’re having when Maya talked about infrastructure as code,” Tom said. “I felt like we were throwing away fifteen years of experience. But what I learned is that automation doesn’t replace expertise—it multiplies it. My knowledge of how to configure an application server is now in Chef cookbooks that anyone can run. That doesn’t make me less valuable. It makes my knowledge more valuable because it’s reusable.”
“And you’re not worried about job security?” Jake asked.
“I was,” Tom admitted. “But then I realized—organizations don’t pay us to click buttons. They pay us to solve problems and make good decisions. If all we’re doing is clicking buttons, then yeah, we should worry. But if we’re solving problems? That’s always going to be valuable.”
Sarah had joined them now, too, along with Marcus and Priya.
“What’s going on?” Sarah asked.
“Jake’s concerned about moving to managed Exadata,” Maya explained. “He’s worried about losing control and becoming irrelevant.”
“Oh, I can speak to that,” Sarah said. “Before I came here, I worked at a company that moved from self-managed databases to Amazon RDS. The DBAs freaked out initially. Same concerns—losing control, becoming unnecessary.”
“What happened?” Jake asked.
“Their jobs got way more interesting,” Sarah said. “They stopped spending time on backups and patches and started building internal database tooling. They created query analysis frameworks that automatically identified slow queries and suggested optimizations. They built data pipeline automation. They became strategic partners to the development teams instead of the people you file a ticket with when the database is full.”
“And nobody got laid off?” Jake asked.
“Nobody,” Sarah confirmed. “They just redirected their energy to higher-value work. One guy became the data architecture lead. Another one built an entire internal analytics platform. The third one began providing database performance consulting to all development teams. They went from firefighters to architects.”
Maya pulled up one more document on the screen. “Jake, I want to show you something else. This is what happens to organizations that resist managed services.”
The Traditional DBA Career Path: Self-Managed Databases
Year 1-5: Learning database administration, operational basics
Year 6-15: Deep expertise in backup/recovery, performance tuning, HA configuration
Year 16-25: Senior DBA, mentoring others, and making architecture decisions
Year 26+: ???
Common challenges:
Skills become increasingly niche as the industry moves to managed services
Operational toil increases with system complexity
Difficult to stay current with modern data platforms
Career mobility decreases (fewer companies want self-managed DBAs)
Burnout from on-call and operational burden
The Modern DBA Career Path: Managed Services + Strategic Work
Year 1-5: Learning database fundamentals
Year 6-15: Performance optimization, query tuning, data modeling
Year 16-25: Data architecture, platform strategy, cross-platform expertise
Year 26+: Database architect, data platform lead, strategic advisor
Advantages:
Skills remain relevant (performance optimization always matters)
Learning time freed up for new technologies (cloud platforms, analytics, ML)
Career mobility increases (strategic skills transfer across companies)
Better work-life balance (less on-call operational burden)
Higher compensation (strategic roles pay more than operational roles)
Jake studied both paths. “You’re saying that by resisting managed services, I’m actually limiting my career growth.”
“I’m saying the industry is moving to managed services whether we like it or not,” Maya said gently. “AWS, GCP, Azure—they’re all betting big on managed databases. Oracle is investing heavily in Exadata as a service. In five years, most enterprise databases will be managed services. DBAs who only know how to manage infrastructure will struggle. DBAs who can optimize performance and design data architectures will thrive.”
“And you think I can make that transition?” Jake asked.
“I know you can,” Maya said. “Because you’re already doing it. Last week, you spotted a performance issue in our monitoring data before it became critical. That’s the kind of proactive work that managed services enable. You weren’t busy patching, so you had time to actually analyze performance trends.”
Priya spoke up. “Jake, can I ask you something? What part of your DBA work do you actually enjoy?”
Jake thought about it. “The puzzle-solving. When someone says, ‘this query is slow,’ and I get to figure out why. Looking at execution plans, finding the missing index or the bad join, fixing it, and seeing the performance improve. That’s satisfying.”
“That’s the work Maya’s saying you’d get to do more of,” Priya said.
“Yeah,” Jake said quietly. “I guess it is.”
Marcus added, “And honestly, Jake, we need you to do more of that. I’ve got integration queries that I know could be faster, but I don’t know enough about Oracle to optimize them. If you had more time to work with me on that, our integrations would be way more efficient.”
“Same with me,” Priya said. “I write SQL in my customizations, but I know I’m probably doing dumb things. I could use a DBA who has time to review my queries and teach me better patterns.”
Jake looked at his team—people who wanted to learn from him, who valued his expertise, who needed him to do more strategic work than he currently had time for.
“Okay,” Jake said finally. “I’m still nervous about this. But I see the argument. Show me what this Exadata migration actually looks like.”
Maya pulled up the migration plan she’d drafted. “We’re not ripping and replacing. We’re doing a methodical migration with extensive testing.”
Exadata Migration Plan: 6 Weeks
Week 5 (This Week): Planning and Preparation
Jake evaluates Exadata capabilities vs. the current RAC setup
Identify any features we use that work differently on Exadata
Build a test plan for functionality validation
Set up Exadata test environment on GCP
Week 6: Development Environment Migration
Migrate the dev database to Exadata
Run full test suite
Validate performance, functionality, and integrations
Team learns Exadata management interfaces
Week 7: QA Environment Migration
Migrate the QA database using the refined process
Extended testing with real workloads
Performance benchmarking
Backup/recovery testing
Week 8: Performance Validation
Compare performance metrics: Exadata vs. RAC
SQL query performance analysis
Batch job timing validation
User acceptance testing
Weeks 9-10: Production Migration Planning
Final migration runbook
Rollback procedures
Communication plan
Schedule maintenance window
Week 11: Production Migration
Execute during the scheduled maintenance window
Monitor closely post-migration
Jake's full attention on performance validation
“Six weeks from planning to production,” Maya said. “And you’re involved in every step. You’re the one validating that Exadata meets our performance requirements. You’re the one deciding if we’re ready to migrate production. This isn’t me taking control away from you—it’s you using your expertise to evaluate a better platform.”
Jake nodded slowly. “Okay. I can work with this. But I have conditions.”
“Name them,” Maya said.
“One: if we migrate to Exadata and performance is worse than our current RAC setup, we roll back. I’m not going to sacrifice database performance to save money.”
“Agreed,” Maya said. “Performance is non-negotiable.”
“Two: I’m the one who decides when we’re ready to migrate production. If I say we need more testing, we take more time.”
“Absolutely,” Maya said. “You’re the database expert. I trust your judgment.”
“Three: I want time allocated for learning. If I’m going to become this strategic DBA you’re describing, I need to actually learn new skills. Performance tuning, data architecture, and modern analytics platforms. I can’t just figure it out on my own while doing my regular job.”
“I can support that,” Maya said. “How about four hours per week for focused learning? Online courses, conferences, certifications—whatever you need.”
“Four hours per week is good,” Jake said. “And one more thing: I want regular check-ins with you about my career development. If we’re redefining what a DBA does, I want to make sure I’m actually growing into that role and not just spinning my wheels.”
“Monthly one-on-ones focused on career development,” Maya offered. “On top of our regular status meetings. We’ll talk about what you’re learning, what projects you want to work on, and where you want to grow.”
“Okay,” Jake said, extending his hand. “I’m in. Let’s evaluate Exadata.”
Maya shook his hand. “Thank you for being open to this. I know it’s uncomfortable.”
“It’s terrifying,” Jake corrected with a slight smile. “But Tom’s right—I don’t want to be clicking buttons for the next ten years. If managed services free me up to do more interesting work, I should at least give it a fair evaluation.”
As the impromptu meeting dispersed, Sarah lingered behind with Maya.
“That was well handled,” Sarah said quietly. “You could have pulled rank and just mandated the migration. But you made the case and let him come to the decision himself.”
“People don’t resist change,” Maya said. “They resist being changed. If I’d just mandated Exadata, Jake would have complied but resented it. By walking him through the reasoning and giving him control over the migration, he’s bought in. Now he’ll make this succeed because he chose it.”
“Very Machiavellian,” Sarah said, grinning.
“Very practical,” Maya corrected. “We have eight weeks left to prove PeopleSoft modernization works. I need my team engaged and motivated, not compliant and resentful. Jake’s expertise is critical to this working. I need him all-in.”
“Do you think Exadata will actually be better than our current setup?” Sarah asked.
“Honestly? I think it’ll be roughly equivalent performance-wise with significantly lower operational burden,” Maya said. “But the real win isn’t performance—it’s freeing Jake to do higher-value work. If we can show Harrison that we’re not just cutting infrastructure costs, but also making our team more strategic and valuable, that’s a compelling story.”
“Strategic team modernization,” Sarah said. “Not just technology modernization.”
“Exactly,” Maya said. “The platform doesn’t matter if the people operating it aren’t growing.”
Wednesday Afternoon: The Exadata Evaluation
By Wednesday afternoon, Jake had spent two days deep in Oracle Exadata documentation, architecture diagrams, and performance whitepapers. The team gathered for his evaluation presentation.
“Alright,” Jake said, pulling up his slides. “Maya asked me to evaluate whether Oracle Exadata Database Service on Google Cloud is a viable replacement for our current RAC environment. Here’s what I found.”
He clicked to his first slide: a comparison chart.
Oracle RAC (Current) vs. Exadata (Proposed)
Architecture:
RAC: Two database nodes, shared SAN storage, application-level high availability
Exadata: Integrated system with compute nodes and intelligent storage servers
Performance Features:
RAC: Standard Oracle performance (dependent on our tuning)
Exadata: Smart Scan, Hybrid Columnar Compression, Storage Indexes, Flash Cache
High Availability:
RAC: Node failover (we manage), manual storage failover
Exadata: Automatic failover, storage redundancy built in, automated recovery
Backup/Recovery:
RAC: RMAN scripts we maintain, recovery time depends on backup size
Exadata: Automated snapshots, incremental forever backups, fast recovery
Patching:
RAC: Manual patch application, extensive testing, quarterly 40-hour effort
Exadata: Rolling patches with zero downtime, Oracle-tested combinations
“First thing I looked at was whether Exadata could actually run PeopleSoft,” Jake said. “Answer: yes. Oracle specifically supports PeopleSoft on Exadata. In fact, several large universities are already running it—Ohio State, University of Michigan, Penn State. So we’re not pioneers. We’re followers, which is good for risk management.”
He clicked to the next slide.
Performance Analysis
“Second question: will it be faster, slower, or the same as our current setup? This is what I spent most of my time on.”
Jake pulled up a detailed analysis. “Exadata has several performance features we don’t have with standard RAC. Smart Scan pushes query processing down to the storage layer, dramatically speeding up full-table scans. Hybrid Columnar Compression can reduce storage requirements by up to 10x for historical data. Flash Cache speeds up frequently accessed data.”
“But here’s the thing,” Jake continued. “Those features mainly help with analytics workloads and data warehouse queries. PeopleSoft is primarily an OLTP workload—lots of small transactions, index lookups, not many full table scans. So we won’t see massive performance gains from Smart Scan.”
“So it won’t be faster?” Tom asked.
“I didn’t say that,” Jake clarified. “The flash cache and faster storage will help our random read performance, which PeopleSoft does a lot of. And the automated performance tuning that Exadata does in the background is better than what I do manually. Overall, I’d expect 10-20% better performance for typical PeopleSoft operations, with potentially much better performance for reporting and analytics.”
“What about batch jobs?” Priya asked.
“That’s where it gets interesting,” Jake said. “Our batch jobs do a lot of full table processing. Smart Scan could significantly speed those up—potentially 30-50% faster. But I won’t know for sure until we test with real workloads.”
He clicked to the next slide: Operational Considerations.
“Third question: what do we give up by moving to a managed service?”
What We Lose:
Direct access to storage layer configuration
Ability to customize RAC parameters that we never customize
Some diagnostic capabilities (replaced by Exadata-specific tools)
The satisfaction of manually managing infrastructure (not actually a loss)
What We Gain:
Automated patching with zero downtime
Better performance out of the box
Oracle’s 24/7 infrastructure support
Automated scaling (add capacity in hours, not months)
Advanced performance features (Smart Scan, compression, flash cache)
Simplified disaster recovery (automated snapshots, fast recovery)
“Honestly?” Jake said. “Most of what we ‘lose’ is stuff I rarely touch. We set up our RAC cluster six years ago and haven’t changed the fundamental configuration since. I’ve tweaked database parameters, sure, but I can still do that on Exadata. The infrastructure-level stuff we lose access to? I won’t miss it.”
“What about the diagnostic tools?” Marcus asked. “You use those for troubleshooting, right?”
“I use Oracle’s diagnostic tools—AWR reports, SQL tuning advisor, execution plans,” Jake said. “All of those still work on Exadata. What I lose is access to low-level storage diagnostics. But Exadata has its own diagnostic tools that are actually better for that platform. It’s not a loss—it’s a different toolset.”
Jake pulled up his next slide: Risk Assessment.
Migration Risks:
High Risk:
None identified
Medium Risk:
Performance regression for specific queries (mitigation: extensive testing)
Team learning curve on Exadata management (mitigation: training and Oracle support)
Migration execution issues (mitigation: detailed runbook, tested rollback)
Low Risk:
Compatibility issues (PeopleSoft is certified on Exadata)
Functional gaps (Exadata is a superset of RAC capabilities)
“I spent a lot of time looking for showstopper risks,” Jake said. “Things that would make this migration a bad idea. I didn’t find any. The biggest risk is that we screw up the migration itself, not that Exadata can’t handle our workload.”
Sarah raised her hand. “What about vendor lock-in? Once we’re on Exadata, are we stuck with Oracle forever?”
“We’re already stuck with Oracle,” Jake said bluntly. “PeopleSoft runs on Oracle Database. Whether it’s RAC in our data center or Exadata on GCP, we’re committed to Oracle. The Exadata service doesn’t increase our lock-in—it just changes who operates the infrastructure.”
“Fair point,” Sarah said.
Jake pulled up his final slide: Recommendation.
Recommendation: Proceed with Exadata Migration
Rationale:
Meets all functional requirements
Expected 10-20% performance improvement for OLTP, 30-50% for batch
Reduces operational burden by ~90%
Lowers total cost by $110K annually
Enables strategic DBA work vs. operational toil
Low migration risk with a proven rollback plan
Conditions:
Complete testing in dev and QA before production migration
Performance validation with real workloads
Team training on Exadata management
My approval before production migration
“I’m recommending we do this,” Jake said. “Not because Maya told me to, but because it’s legitimately a better platform for less money with less operational overhead. The only way I’d recommend against it is if we found performance issues in testing. But I don’t expect to find them.”
The room was quiet.
“Questions?” Jake asked.
Tom spoke up. “Jake, are you actually comfortable with this? You were pretty opposed on Monday.”
“I was,” Jake admitted. “Because I was thinking about what I’d lose. But after two days of really looking at Exadata—reading the architecture docs, reviewing performance benchmarks, and thinking through the operational model—I’m convinced this is the right move. Not just for cost savings. For operational excellence.”
“What changed your mind?” Priya asked.
“Two things,” Jake said. “First, I realized I was defending my ability to do work I don’t actually want to do. Managing backups isn’t fulfilling. Patching databases isn’t fun. I was defending that work because I thought it made me valuable. But it doesn’t. What makes me valuable is my ability to optimize database performance and to design effective data architectures. Exadata frees me up to do more of that.”
“And second?” Maya prompted.
“Second, I realized that resisting managed services is like… you know how some sysadmins refused to learn virtualization ten years ago because they thought it was ‘not real infrastructure’? And then virtualization became standard, and those sysadmins became obsolete? I don’t want to be that person. The industry is moving to managed services. I can either complain about it and become irrelevant, or I can adapt and stay valuable.”
“That’s a pretty mature perspective,” Marcus said.
“I’m twenty-six years into my career,” Jake said. “I’d like to have another twenty years. That means evolving with the industry, not fighting it.”
Maya stood up. “Alright. Jake recommends proceeding with the Exadata migration. I agree. Unless anyone has serious objections?”
Silence.
“Good,” Maya said. “Here’s the plan for the rest of this week. Jake, you’re setting up the Exadata test environment on GCP. Sarah, you’re helping him with the GCP integration. Tom, start documenting our application server connection strings so we can update them during migration. Marcus, review our integrations that hit the database directly—we need to make sure they’ll work on Exadata. Priya, work with Jake on the testing plan. Lisa, start building the migration runbook.”
“What about you?” Jake asked.
“I’m updating our cost model for Harrison’s weekly report,” Maya said. “And drafting the business case for Exadata that we’ll use in the final Week 12 presentation. This is a big deal—$110,000 in annual savings is significant. I want to make sure we communicate the value clearly.”
As the team dispersed to start their work, Jake pulled Maya aside.
“Thank you,” he said quietly.
“For what?” Maya asked.
“For not just ramming this through,” Jake said. “You could have said, ‘We’re migrating to Exadata, deal with it.’ Instead, you made the case, showed me the data, and let me come to my own conclusion. That matters.”
“You’re the database expert,” Maya said simply. “If you thought Exadata was a bad idea, I would have listened. I’m not trying to push technology for technology’s sake. I’m trying to build a better operation. And that only works if the people doing the work believe in it.”
“I believe in it now,” Jake said. “And I’m actually excited about this. For the first time in years, I’m thinking about database work as something more than just keeping the lights on.”
“That’s what I was hoping for,” Maya said. “Now go set up that test environment. I want to see Exadata running by the end of the week.”
Friday Afternoon: First Test
By Friday afternoon, Jake had the Exadata test environment running on Google Cloud. The team gathered to watch the first database restore from their production backup to the new platform.
“This is it,” Jake said, initiating the restore process. “Production database backup from last night, restoring to Exadata. On our old system, this would take about six hours. On Exadata with their snapshot technology…”
They watched the progress indicator.
Forty-two minutes later, the restore was complete.
“Forty-two minutes,” Jake said, staring at the screen. “For a 1.2 terabyte database. That’s… that’s eight times faster than our current restore process.”
“That’s your disaster recovery time,” Maya said. “Forty-two minutes from disaster to running database.”
“Run a query,” Tom suggested. “Let’s see if it actually works.”
Jake opened SQL Developer and connected to the Exadata instance. He ran a complex query that he knew typically took about eight seconds on their production system.
It returned in 4.3 seconds.
“That’s almost twice as fast,” Jake said, running it again to make sure. 4.2 seconds. “Same query, half the time.”
“Is that the Smart Scan feature?” Priya asked.
“Probably a combination of things,” Jake said, pulling up the execution plan. “Flash cache, faster storage, and yeah, some Smart Scan optimization. This query does a full table scan on a large table, which is exactly what Smart Scan helps with.”
He ran another query—a typical PeopleSoft transaction query with lots of index lookups.
Production time: 0.3 seconds
Exadata time: 0.2 seconds
“About 30% faster on OLTP queries too,” Jake noted. “The flash cache is helping with hot data.”
Sarah was watching the Exadata performance monitoring console. “Look at the storage I/O. Those storage servers are doing offload processing—they’re filtering data before sending it to the database layer. That’s really cool architecture.”
“It is,” Jake admitted. “Oracle actually did something smart here. Instead of just throwing faster hardware at the problem, they redesigned the architecture to push processing closer to the data.”
Marcus had a question. “What about our batch jobs? Can we test one?”
“Let’s try the nightly student enrollment batch,” Jake said. “In production, that typically runs for two hours. Let me kick it off here with yesterday’s data.”
He started the batch job, and they watched the progress.
One hour and fourteen minutes later, it was complete.
“That’s 38% faster than production,” Jake said, checking the logs. “Same data volume, same processing logic, just running on better infrastructure.”
“So everything’s faster,” Tom summarized. “Restore time, query time, batch time. All better than what we have.”
“So far,” Jake cautioned. “This is one day of testing with a handful of queries. We need weeks of testing with real workloads before I’m comfortable migrating production.”
“But you’re not seeing any red flags?” Maya asked.
“None,” Jake said. “Which honestly surprises me. I expected to find something that didn’t work well—some query that ran slower, some feature that was incompatible. But so far? It’s just better.”
“Maybe Oracle actually knows what they’re doing with this product,” Sarah suggested.
“Don’t get crazy,” Jake said with a smile. “But yeah, they did good work here. This is legitimately impressive infrastructure.”
Maya pulled up the week’s summary on the screen.
Week 5: Database Modernization
Decision Made:
Migrate from self-managed Oracle RAC to Exadata Database Service on GCP.
Jake’s full evaluation and recommendation
Team consensus and buy-in
Test Environment:
Exadata Quarter Rack deployed on GCP
Production database restored (42 minutes vs. 6 hours)
Initial performance testing (10-40% faster across workloads)
No compatibility issues identified
Business Value:
Annual cost savings: $110,480
Operational time savings: 74 hours/month (Jake’s time)
Performance improvement: 10-40% depending on workload type
Risk reduction: Automated backups, faster DR, Oracle infrastructure support
Next Steps:
Week 6: Dev database migration and testing
Week 7: QA database migration and extended testing
Week 8-10: Performance validation and production migration planning
“This is what I’m reporting to Harrison on Monday,” Maya said. “One week from ‘we need to evaluate this’ to ‘we have a working test environment and initial positive results.’ That’s execution.”
“Can I add something to the report?” Jake asked.
“Of course.”
“Include that the DBA team fully supports this migration,” Jake said. “I want Harrison to know this isn’t being forced on us. We evaluated it, we tested it, and we believe it’s the right move.”
“That’s valuable,” Maya said, making a note. “I’ll quote you directly in the status report.”
As the team started to pack up for the weekend, Tom pulled Jake aside.
“Hey, I wanted to say something. Your presentation on Wednesday was really good. You could have just rubber-stamped Maya’s plan, but you did real analysis. That matters.”
“Thanks,” Jake said. “I figured if I’m going to recommend a major migration, I should actually understand what we’re migrating to.”
“I think you surprised Maya a little,” Tom said. “In a good way. She was probably expecting more resistance.”
“I surprised myself,” Jake admitted. “Monday morning, I was ready to dig in and fight this. But the more I looked at it objectively, the more I realized the fight was about ego, not engineering. I didn’t want to admit that there might be a better way to run databases than how I’ve been doing it for twenty years.”
“That’s hard to admit,” Tom said.
“Yeah,” Jake agreed. “But you know what’s harder? Spending the next twenty years doing work I don’t enjoy because I was too proud to change. I’d rather spend my time optimizing SQL than babysitting backups. If Exadata lets me do that, I’m all for it.”
They walked out together, leaving the conference room with its whiteboard full of Exadata architecture diagrams and performance benchmarks.
Maya stayed behind for a moment to update her timeline.
Five weeks down. Seven weeks to go.
Week 1: Honest assessment and baseline
Week 2-3: Infrastructure as Code
Week 4: Observability
Week 5: Database modernization decision
They were more than on track. They were ahead.
And more importantly, her team was evolving. Jake had gone from defensive to analytical to advocating for change. That transformation—from “we lose control” to “this is legitimately better”—was exactly what Maya needed to see.
Because the twelve-week challenge wasn’t just about proving PeopleSoft could be modernized.
It was about proving that traditional IT teams could modernize themselves.
One mindset shift at a time.
Technical Takeaway: The Managed Database Services Decision Framework
Jake’s journey from resistance to advocacy illustrates the critical decision framework for PeopleSoft DBAs evaluating managed database services:
The Core Question
Not: “Can we manage databases ourselves?”
But: “Should we manage database infrastructure ourselves?”
These are fundamentally different questions with different answers.
The Traditional DBA Value Proposition
For decades, DBAs justified their value through operational tasks:
Managing backups and recovery
Applying patches and updates
Configuring high availability
Monitoring disk space and storage
Troubleshooting infrastructure issues
Performing routine maintenance
This work is necessary. It’s also increasingly commoditized.
The Managed Services Shift
Managed database services (Oracle Exadata, AWS RDS, Azure SQL Database, Google Cloud SQL) automate operational tasks:
Backups happen automatically with point-in-time recovery.
Patches apply with zero downtime on managed schedules.
High availability is built in and automated.
Storage scales automatically.
The provider handles infrastructure monitoring.
Routine maintenance is automated.
This shift terrifies traditional DBAs because it appears to eliminate their value.
The Reality: Strategic vs. Operational Work
DBAs create value in two categories:
Operational Work (automatable):
Backup management
Patch application
Storage management
Infrastructure monitoring
Routine maintenance
Disaster recovery execution
Strategic Work (not automatable):
SQL query optimization
Execution plan analysis
Index strategy and design
Data modeling and architecture
Performance troubleshooting
Capacity planning based on business trends
Developer education and support
Database platform evolution
Managed services eliminate operational work. They amplify strategic work.
The Time Allocation Analysis
Most PeopleSoft DBAs spend:
60-80% of time on operational tasks
20-40% of time on strategic work
With managed services:
5-10% of time on operational oversight
90-95% of time on strategic work
Same total hours. Radically different value creation.
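The arithmetic behind that shift is simple enough to sketch. A minimal illustration, assuming a 160-hour working month and the midpoints of the percentage ranges above—both are assumptions for the arithmetic, intentionally cruder than the 74 hours/month figure quoted for Jake:

```python
# Illustrative DBA time-allocation shift, assuming a 160-hour month.
# The 70% and 7.5% operational fractions are midpoints of the 60-80%
# and 5-10% ranges quoted above; none of this is measured data.

MONTHLY_HOURS = 160

def split(operational_fraction, total=MONTHLY_HOURS):
    """Divide a month into (operational, strategic) hours."""
    operational = round(total * operational_fraction)
    return operational, total - operational

before_ops, before_strat = split(0.70)    # self-managed: ~70% operational
after_ops, after_strat = split(0.075)     # managed: ~7.5% operational oversight

print(f"Self-managed: {before_ops}h operational, {before_strat}h strategic")
print(f"Managed:      {after_ops}h operational, {after_strat}h strategic")
print(f"Strategic hours reclaimed per month: {after_strat - before_strat}")
```

Same 160 hours either way; what changes is where they go.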
The Cost Model
Self-Managed Database Total Cost:
Infrastructure (servers, storage, networking)
Software licenses and support
Data center (space, power, cooling)
DBA operational time
Infrastructure team support
Opportunity cost of DBA time on toil
Managed Database Service Total Cost:
Service subscription (includes infrastructure, software, support)
DBA strategic time
Training and skill development
For most organizations, managed services cost 30-50% less than self-managed while delivering better performance and reliability.
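A back-of-the-envelope comparison makes the structure of the two cost models concrete. Every line item and dollar amount below is a hypothetical placeholder for illustration—the chapter's actual $110,480 savings figure has no published breakdown, and these numbers are not it:

```python
# Hypothetical annual cost comparison. All dollar amounts are made-up
# placeholders chosen to land inside the 30-50% savings range cited above.

self_managed = {
    "infrastructure (servers/storage/network)": 120_000,
    "software licenses and support":            150_000,
    "data center (space/power/cooling)":         30_000,
    "DBA operational time":                      60_000,
    "infrastructure team support":               20_000,
}

managed = {
    "service subscription (infra + software + support)": 180_000,
    "DBA strategic time":                                 60_000,
    "training and skill development":                     10_000,
}

sm_total = sum(self_managed.values())
m_total = sum(managed.values())
savings_pct = (sm_total - m_total) / sm_total * 100

print(f"Self-managed: ${sm_total:,} | Managed: ${m_total:,}")
print(f"Savings: ${sm_total - m_total:,} ({savings_pct:.0f}%)")
```

Note that DBA salary appears in both models—the point is not eliminating the role but redirecting the spend from operational to strategic line items.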
The Performance Reality
Managed database services often perform better than self-managed:
Why?
Specialized hardware (Exadata, custom AWS instances)
Optimized configurations based on millions of workloads
Advanced features (Smart Scan, storage offload, flash cache)
Regular performance tuning by provider engineers
Latest patches and optimizations are applied automatically.
When self-managed might be better:
Highly specialized configurations requiring deep customization
Regulatory requirements preventing cloud deployment
Existing infrastructure with capacity to spare
Extremely cost-sensitive scenarios (rare)
For typical PeopleSoft workloads, managed services match or exceed self-managed performance.
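The Week 5 benchmarks from this chapter illustrate the point. A quick recomputation of the improvements the team measured—every timing below comes from the tests described above, nothing new:

```python
# Sanity-check the Week 5 performance claims from the measured timings.
# Timings: restore 6h -> 42min, analytic query 8.0s -> 4.3s,
# OLTP query 0.3s -> 0.2s, nightly batch 2h -> 1h14m.

def improvement(before, after):
    """Percent improvement going from `before` to `after` (same units)."""
    return round((before - after) / before * 100)

restore_speedup = round((6 * 60) / 42, 1)   # restore: minutes to minutes
query_pct = improvement(8.0, 4.3)           # complex analytic query
oltp_pct = improvement(0.3, 0.2)            # PeopleSoft OLTP query
batch_pct = improvement(2 * 60, 74)         # student enrollment batch

print(f"Restore: {restore_speedup}x faster")
print(f"Analytic: {query_pct}% | OLTP: {oltp_pct}% | Batch: {batch_pct}%")
```

The results line up with the chapter's claims: roughly an 8.6x faster restore, and 30-46% improvements across query and batch workloads.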
The Control Argument
What you actually lose:
Infrastructure-level configuration (storage, networking)
Ability to apply custom patches outside provider schedules
Direct hardware access for diagnostics
What you retain:
Database-level configuration and tuning
SQL optimization and query control
Schema design and modifications
User and security management
Application-level performance tuning
The key insight: Most of what DBAs think they need control over, they rarely actually change.
The Career Development Perspective
Traditional DBA skills are decreasing in value:
Manual backup/recovery procedures
Infrastructure hardware troubleshooting
Physical storage management
On-premise high availability configuration
Modern DBA skills are increasing in value:
SQL performance optimization (always relevant)
Cloud database architecture
Multi-database platform expertise
Data modeling and design patterns
Performance analysis and tuning
Developer enablement and education
Database automation and tooling
Managed services force DBAs to develop higher-value skills.
The Decision Framework
Evaluate managed services when:
Operational overhead exceeds 50% of DBA time.
Infrastructure is aging and needs replacement.
Disaster recovery is difficult or untested.
Team lacks deep infrastructure expertise.
The organization is moving to the cloud.
You want to reduce operational risk.
Proceed with managed services if:
Performance testing shows equivalent or better results.
Cost analysis shows savings (usually 30-50%).
Team is willing to adapt skillsets.
Provider supports your database platform and version.
Regulatory requirements are met.
Stay with self-managed if:
Truly unique configuration requirements exist
In-house expertise dramatically exceeds provider capability (rare)
Regulatory constraints prevent cloud deployment
Cost analysis definitively favors self-managed (very rare)
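The "proceed" checklist above can be sketched as a simple go/no-go gate. The criterion names and the Week 5 answers below are a paraphrase of this chapter for illustration, not an established evaluation tool:

```python
# A go/no-go gate over the "proceed with managed services if" criteria.
# Criterion names are this sketch's own; answers reflect Week 5's findings.

def recommend_managed(answers: dict) -> bool:
    """Recommend a managed service only if every gate passes."""
    gates = [
        "performance_equivalent_or_better",
        "cost_savings_demonstrated",
        "team_willing_to_adapt",
        "platform_and_version_supported",
        "regulatory_requirements_met",
    ]
    return all(answers.get(gate, False) for gate in gates)

week5_results = {
    "performance_equivalent_or_better": True,   # 10-40% faster in testing
    "cost_savings_demonstrated": True,          # ~$110K/year
    "team_willing_to_adapt": True,              # Jake's own recommendation
    "platform_and_version_supported": True,     # Oracle Database on Exadata
    "regulatory_requirements_met": True,
}

print(recommend_managed(week5_results))
```

Any single failed gate—a slower benchmark, an unsupported version, a regulatory block—flips the recommendation back to self-managed, which mirrors Jake's "only if testing finds problems" condition.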
The Implementation Approach
Phase 1: Evaluation (1-2 weeks)
Deep dive on provider capabilities
Compatibility verification
Feature gap analysis
Cost modeling
Phase 2: Proof of Concept (2-3 weeks)
Deploy test environment
Restore production data
Performance benchmarking
Functionality validation
Phase 3: Migration Planning (2-3 weeks)
Detailed migration runbook
Rollback procedures
Team training
Risk mitigation planning
Phase 4: Staged Migration (4-6 weeks)
Migrate the dev environment
Extended testing in dev
Migrate QA environment
Performance validation in QA
Production migration with careful monitoring
Phase 5: Optimization (ongoing)
Leverage advanced provider features
Tune for specific workloads
Continuous performance improvement
The Team Transformation
Successful managed services adoption requires:
Leadership:
Clear communication about role evolution
Career development support
Training budget allocation
Patience during the learning curve
DBAs:
Willingness to learn new tools and approaches
Focus on strategic skill development
Embrace of automation (not resistance)
Trust in provider capabilities
Organization:
Recognition that value shifts from operational to strategic
Investment in DBA skill development
Support for new ways of working
The Bottom Line
The decision between managed and self-managed databases isn’t about technology capabilities.
It’s about organizational priorities:
Choose self-managed if: You want control over infrastructure and are willing to pay (in time and money) for that control.
Choose managed services if: You want to redirect DBA expertise toward strategic work that directly impacts business outcomes.
For most PeopleSoft organizations, managed services are the better choice.
Not because DBAs are unnecessary.
But because DBAs are too valuable to spend their time on operational toil that can be automated.
Jake learned this lesson in Week 5.
The question is: will your DBAs learn it before or after the industry makes the decision for them?