OCI Daily
Saturday, May 3, 2025
Sending Email with APEX_MAIL and Mailx using OCI Email Delivery
This is a very common requirement; funny that I'd never needed it until today. The requirement is to send emails from my APEX application using the APEX_MAIL package. For cloud deployments (especially on Autonomous Database) the recommended way is to use the Email Delivery service. And of course most customers would like to use their own domain as the sender. So here are the steps:
1 Create an email domain following Developer Services >> Email Delivery >> Email Domains >> Create Email Domain.
2 Add DomainKeys Identified Mail (DKIM).
The DKIM selector uses the <prefix>-<shortregioncode>-<yyyymm> format; you can find the short region codes here. Click the Generate DKIM Record button to populate the CNAME values, and save these values to update your DNS records.
Until your DNS records are updated, the DKIM record will show as inactive.
3 Update your DNS records and add the new CNAME. I am using Cloudflare, but it could be OCI DNS Management as well.
4 Create an Approved Sender.
5 Update your DNS records with a Sender Policy Framework (SPF) record. You can also check the SPF configuration document. Add a TXT record that will look like this:
v=spf1 include:rp.oracleemaildelivery.com include:ap.rp.oracleemaildelivery.com include:eu.rp.oracleemaildelivery.com ~all
6 Create SMTP credentials following User >> Profile >> Saved Passwords >> SMTP credentials >> Generate Credentials. Save the values, as the password won't be displayed again.
7 Get the SMTP sending information by following the Developer Services >> Email Delivery >> Configuration menu path. Copy the public endpoint and port information.
8 Test sending email.
Option 1: Use APEX_MAIL
i Connect to your Autonomous Transaction Processing instance as the ADMIN user using a SQL client and configure the SMTP parameters using APEX_INSTANCE_ADMIN.SET_PARAMETER.
ii Send a test email from APEX SQL Workshop >> SQL Commands, specifying the approved sender.
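The two sub-steps might look like this in SQL. All endpoint, credential and address values below are placeholders; substitute the ones you collected in steps 6 and 7, and note the sender must be one of your approved senders:

```sql
-- As ADMIN: point the APEX instance at the Email Delivery SMTP endpoint
BEGIN
    apex_instance_admin.set_parameter('SMTP_HOST_ADDRESS', 'smtp.email.eu-frankfurt-1.oci.oraclecloud.com');
    apex_instance_admin.set_parameter('SMTP_HOST_PORT', '587');
    apex_instance_admin.set_parameter('SMTP_USERNAME', 'ocid1.user.oc1..example-smtp-user');
    apex_instance_admin.set_parameter('SMTP_PASSWORD', 'generated-smtp-credential');
    apex_instance_admin.set_parameter('SMTP_TLS_MODE', 'STARTTLS');
    COMMIT;
END;
/

-- From SQL Workshop: queue a test message and push the mail queue
BEGIN
    apex_mail.send(
        p_from => 'no-reply@yourdomain.com',  -- must be an approved sender
        p_to   => 'recipient@example.com',
        p_subj => 'APEX_MAIL test via OCI Email Delivery',
        p_body => 'Hello from APEX!');
    apex_mail.push_queue;
END;
/
```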
There was a delay of a few minutes, but I received the email.
Option 2: Use Mailx on OEL 8
i Install and configure mailx. The email was sent almost instantly.
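For reference, a minimal mailx setup might look like this. On OEL 8 mailx is provided by s-nail; the SMTP endpoint, user and credential below are placeholders for the values from steps 6 and 7:

```shell
# Write per-user mailx settings for OCI Email Delivery
# (replace endpoint, user and password with your own values)
cat > "$HOME/.mailrc" <<'EOF'
set smtp-use-starttls
set smtp=smtp://smtp.email.eu-frankfurt-1.oci.oraclecloud.com:587
set smtp-auth=login
set smtp-auth-user=ocid1.user.oc1..example-smtp-user
set smtp-auth-password=generated-smtp-credential
set from=no-reply@yourdomain.com
set nss-config-dir=/etc/pki/nssdb/
EOF

# Then send a test message (run once DNS and credentials are in place):
#   echo "Test body" | mailx -v -s "OCI Email Delivery test" recipient@example.com
echo "mailx configured"
```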
References:
1. Email Delivery Service Documentation
2. Comprehensive Guide to Testing OCI Email Delivery. Monir's guide was excellent; I basically followed the steps in his post.
3. Integrating Oracle APEX with Email Delivery. The Email Delivery service has good documentation for integrating with different applications.
4. APEX_MAIL Package specification
5. OCI Regions and Region Keys
Tuesday, December 31, 2024
Back to the basics: Deploying Container Instances Using Container Image From OCI Container Registry
When deploying containers with the Container Instances service using images from a private registry, you either have to provide a user name and password, or you can let Container Instances pull the images from the Container Registry directly. Here are the steps:
1 Create a dynamic group with Container Instances as the resource type. Add a rule with the following syntax:
ALL {resource.type='computecontainerinstance'}
2 Write the following policy to grant access for the dynamic group:
Allow dynamic-group ContainerInstanceDynamicGroup to read repos in tenancy
Note
The CREATE_CONTAINER_INSTANCE work request will fail with the following message if you try to pull the image from a private registry without authentication.
A container's image could not be pulled because the image does not exist or requires authorization.
Back to the basics: Pushing Container Images to Private OCI Container Registry
Once it's configured, it's forgotten until you need it again. I've changed my laptop and had to reconfigure everything. Here are the steps:
1 Identify your region key from this list.
2 Identify your Object Storage namespace from the tenancy details page.
3 Identify your user name and build the user name string in the following format:
{tenancy-namespace}/{username}
If the user is federated, the format will be:
{tenancy-namespace}/oracleidentitycloudservice/{username}
4 Use your auth token as password.
Finally it should look like this:
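Assembled from example values (substitute your own region key, tenancy namespace and user name; paste your auth token at the password prompt), the command takes this shape:

```shell
# Example values only; replace with your own
REGION_KEY="fra"                        # e.g. fra for eu-frankfurt-1
TENANCY_NAMESPACE="mytenancynamespace"
IDCS_USER="jdoe@example.com"

REGISTRY="${REGION_KEY}.ocir.io"
LOGIN_USER="${TENANCY_NAMESPACE}/oracleidentitycloudservice/${IDCS_USER}"

# Print the login command; the auth token is entered at the prompt
echo "docker login ${REGISTRY} -u '${LOGIN_USER}'"
```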
Tuesday, March 19, 2024
Back to the basics: Securing Web Deployments with Load Balancer, Let's Encrypt and CloudFlare
A few months ago I started a website to publish my experiments and test my coding skills; I posted about it before here. For that very website I needed to renew my Let's Encrypt certificates. While doing so, I went down a side track that was a dead end, so I decided to post about it as a reminder for myself and for anyone who might need it.
I am publishing this under basics as it is a recurring process and an important part of the deployment. Later on I will post about how to automate it. For now, this follows the certbot manual process with a DNS challenge.
1 I am going to use the DNS challenge method, and the zone info is served by Cloudflare DNS, so I am going to use the certbot-dns-cloudflare plugin. For this purpose I need an API token that allows certbot to edit my DNS zone. Here are the steps for that:
2 Once the token is generated, you can test it with curl.
And you will get a JSON response similar to this one.
3 Put the token into a cloudflare.ini file.
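The file is a single key/value pair; something like this (the token value is a placeholder, and the file should be readable only by you):

```shell
# Credentials file for the certbot-dns-cloudflare plugin
mkdir -p "$HOME/.secrets"
cat > "$HOME/.secrets/cloudflare.ini" <<'EOF'
# Cloudflare API token with Zone:DNS:Edit permission (placeholder value)
dns_cloudflare_api_token = 0123456789abcdef-example-token
EOF
chmod 600 "$HOME/.secrets/cloudflare.ini"
```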
4 Run the certbot container. The command below mounts your local folders inside the container, so that your cloudflare.ini file is accessible and the generated certificates are saved.
5 Follow the on-screen prompts.
6 Find your certificates under the /etc/letsencrypt/live/codeharmony.net/ folder.
7 Add your certificates to the Load Balancer or the Certificate service, or wherever you manage your certificates.
8 Edit your HTTPS listener to use the new certificate, either on the load balancer or on your HTTP server.
9 Inspect the certificate using openssl.
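For step 9, a quick openssl check prints the subject, issuer and validity window. The sketch below generates a throwaway self-signed certificate so the inspection command has input; in practice, point it at the fullchain.pem from step 6:

```shell
# Create a throwaway self-signed certificate (demo input only)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=codeharmony.net" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 7 2>/dev/null

# Inspect subject, issuer and validity dates
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer -dates
```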
References:
1. Certbot User Guide. I followed the manual process.
Wednesday, February 28, 2024
Driving Efficiency with LLMs: Transforming Complex Content into Digestible Summaries
Hey everyone,
I recently shared my latest side project with you all here . It's a simple web page where I've been experimenting with various tools, and my latest addition focuses on AI. Lately, I've been diving deep into the realms of AI, machine learning, and data science, exploring the tools and libraries that are making waves in the tech world. Like many, I'm captivated by the possibilities offered by transformers and LLMs, and I'm eager to integrate them into real-world applications to tackle previously unsolvable problems. We have new tools now, let's see what has changed.
Of course, mastering these technologies doesn't happen overnight. I'm dedicating time to different projects, using them as opportunities to learn, experiment, and refine my skills. In my last blog post , I discussed automating invoice entry, or perhaps "assisting" is a better term. The goal was to extract values from documents and streamline data entry, ultimately making life a little easier and saving time.
This time around, I'm delving into LLMs with a focus on summarization. I've developed and deployed a tool capable of summarizing content from YouTube videos, PDF files, or any web page. The concept is straightforward: extract the text and condense it down. To streamline the process, I'm leveraging 🦜️🔗 LangChain under the hood, which simplifies the task. Here's a glimpse into some of the tools and APIs I've utilized:
1 For YouTube transcript extraction, I've tapped into the youtube-transcript-api, which lets users download transcripts if available. A transcript can be uploaded by the owner or auto-generated, it can be in multiple languages, and it can be translated to other languages as well. All this can happen on the fly. Transcribing the audio/video yourself is also an option; there are a lot of models trained for this purpose, paid ones like Oracle AI Speech and open source ones like Whisper from OpenAI, and many more...
2 Processing PDF documents has been an enjoyable challenge. I was already experimenting extensively with PyMuPDF and now PyPDFLoader. One thing I like about the Python world is that there are hundreds of libraries out there; you can find a huge list of document loaders here.
3 Web page processing required leveraging AsyncHtmlLoader, Html2TextTransformer and Beautiful Soup to sift through the HTML and focus solely on the textual content.
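The idea of stripping boilerplate tags and keeping only the text can be sketched with the standard library alone. This is a toy stand-in for what Html2TextTransformer does, not the LangChain API:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text while skipping content inside noise tags."""
    SKIP = {"script", "style", "nav"}

    def __init__(self):
        super().__init__()
        self.depth = 0   # how many skip-tags we are currently inside
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if not self.depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

print(html_to_text(
    "<html><body><nav>Home</nav><p>Hello <b>world</b></p>"
    "<script>track()</script></body></html>"))  # prints: Hello world
```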
4 Summarization duties are handled by LLMs (such as OpenAI and Cohere), albeit with a limited context window. If the text fits into the LLM context, great; if not, to work around this I've implemented a technique called "MapReduce", chunking the text into manageable pieces for summarization, then condensing those summaries further. This might not be required at all in the near future; competition is tough, and every day we witness increasing model parameters and context windows.
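In the project this is LangChain's MapReduceDocumentsChain; stripped of the LLM calls, the idea can be sketched in plain Python, with `summarize` standing in for a call to the model (a toy stand-in, not the LangChain API):

```python
def chunk_text(text: str, size: int = 1000) -> list[str]:
    """Split text into roughly size-character chunks on word boundaries."""
    words, chunks, current, length = text.split(), [], [], 0
    for w in words:
        current.append(w)
        length += len(w) + 1
        if length >= size:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    return chunks

def map_reduce_summarize(text: str, summarize, size: int = 1000) -> str:
    """Map: summarize each chunk. Reduce: summarize the joined summaries."""
    partials = [summarize(c) for c in chunk_text(text, size)]
    combined = " ".join(partials)
    return summarize(combined) if len(partials) > 1 else combined
```

In practice `summarize` wraps an OpenAI or Cohere call, and the reduce step may need to recurse if the combined summaries still exceed the context window.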
Image taken from geeksforgeeks.org
5 I've added a few user-friendly features, such as automatically fetching and displaying thumbnails for pasted links. Whether it's a YouTube video, a PDF file, or a web page, users see a visual preview to enhance their experience. YouTube offers thumbnails if you know the video_id, which is part of the URL. For PDF files I've created a thumbnail image of the first page. Web pages were a bit tricky; I used headless Chromium to get a screenshot of the page. This Dockerfile was extremely helpful to achieve this.
6 To keep users engaged while awaiting results, I've enabled streaming over WebSockets using Socket.IO for both console logs and chain responses. Although it does come with a cost, watching things happen under the hood is highly motivating for users while they wait. And with ChatGPT, streaming has kind of become the de facto standard.
7 And finally, everything is nicely packed in a container and deployed on Oracle Cloud behind load balancers, secured with TLS/SSL and Cloudflare. You can read about the setup here.
As always, I've shared all the code on my GitHub repository, along with the references that aided me at the bottom of this post. I'll also be putting together a quick demo to showcase these features in action. While these concepts may seem generic, practice truly does refine them. Implementing these tools has not only boosted my confidence but also honed my skills for future challenges.
Conclusion
I hope this project serves as inspiration for readers, sparking ideas for solving their own challenges. Perhaps it could fit into any requirement that needs simplifying the review and categorization of lengthy text. Documents like Market Research Reports, Legal Documents, Financial Reports, Training and Educational Materials, Meeting Minutes and Transcripts, Content Curation for your Social Media and even Competitive Intelligence Gathering...
All we need is a little creativity, and the courage to tackle our old unsolvable problems with our new tools. LangChain itself uses this approach, combined with clustering/classification logic, to improve documentation quality.
If you have any ideas or questions, feel free to discuss them in the comments. Don't hesitate to reach out; together, we might just find the solution you're looking for.
Happy coding!
References:
1. Developing Apps with GPT-4 and ChatGPT. Most of the foundation and ideas came from this book; highly recommended for beginners. It is available on the O'Reilly platform.
2. 🦜️🔗 LangChain. You will find almost all you need: getting started guides, sample code for tools, agents, LLMs, loaders...
3. PyMuPDF PDF library
4. MapReduceDocumentsChain
5. Document Loaders
6. html2image for web page thumbnails
7. Socket.IO WebSockets
8. I, Robot for PDF testing
9. Streaming for LangChain Agents, video tutorial by James Briggs
10. YouTube Transcript API
Monday, February 12, 2024
APEX with AI Services: Automating invoice entry with AI assisted key/value extraction
For a couple of years now, almost everyone has been talking about AI, trying to understand how it can help, both in life and in business. OpenAI's ChatGPT made a huge impact on our lives. "You are a helpful assistant", thank you! I use it daily. I have also been playing around with other language models like Llama 2 and actively learning ML from different resources. Hugging Face is a must-join platform, along with all the material on LangChain. Just these two got me far enough to train and run my own model to classify my emails within a week, and the results were incredibly better than I could ever have expected. Besides, I am enjoying this a lot.
Nowadays the number of interested customers is increasing, and this post is about a very basic customer use-case, a real one: invoice entry. I know, it doesn't sound interesting; at first I thought my technical-consultancy-for-ERP days were over, but I promise this is not boring. It has new challenges for me (and for most customers) and demonstrates the application of AI services to real-life problems. So let's dive into it!
Requirements
"...investigating the possibilities of automating / optimizing the reading and processing of PDF documents with the help of Optical Character Recognition (OCR)..." The moment I saw this I could imagine what they wanted. After verifying their ideal solution in our discovery meeting, we planned a demo to prove it can work.
Here is a mock design: on the right side we display the PDF file, OCR'ed and with all values extracted, and on the left side the form is populated with the extracted values. Ideally the operator will just click save, with a chance to fill in any missing information; a huge time saving.
Challenges
The biggest challenge is the lack of skills: the customer knows APEX inside out, but I am not an APEX developer. I understand the APEX environment and its components and how they work, and have installed and configured it many times. I have followed and demonstrated many workshops, but never developed something from scratch. Yet APEX is low code, there are many samples, and I was able to complete this in 2 days.
Development
I will briefly mention the steps I've followed and highlight the important parts. Using cloud services makes it easy to start.
1 I started by creating an Autonomous Database and an APEX workspace. It takes minutes to start working on APEX development.
2 Then I followed this LiveLabs workshop as a starter application. I tweaked the table structure according to my needs, but it gave me the foundation I needed for interacting with the OCI Document Understanding service; using Object Storage as the staging area is a good decision.
3 Using the Document Understanding service inside APEX was easy.
I added the API endpoint in the application definition.
4 I created a new page with 3 regions: two side by side, the left for showing/entering extracted values, the right an iframe to display the PDF file, and a last one for invoice lines, as designed in the mock wireframe.
For displaying the PDF inline on the right side of the page I followed the instructions in this YouTube video. The only difference is that I didn't have a link item on the same page; the ID has to come as a page parameter. So I added the link on my home page where all uploaded files are listed and passed the document_id as a page parameter, then created a new Page Load dynamic action to get the ID and trigger the PDFViewer action to display the file. After changing the theme to Redwood (with some modifications to make it dark), the application looks like this:
Conclusion
APEX and AI services are a very powerful combination that can help you boost productivity. Please share what you think, and of course new use-cases, in the comments; maybe we can build one together!
References:
1. LiveLabs: Use OCI Object Storage to Store Files in Oracle APEX Applications
2. OCI Developer Documentation: Document Understanding API
3. OCI Developer Documentation: Key Value Extraction (Invoices)
4. Oracle Blog: How to Store, Query, and Create JSON Documents in Oracle Database
5. Oracle Developer YouTube Channel: PDF Viewer in Oracle APEX
6. Banner Image Credit: Dall-e 3
7. YouTube Background Soundtrack: BenSound: Royalty Free Music for Videos
8. Postman Collections: OCI REST API Postman Collection