How to Get the Count/Length of a Filter Array Result When Querying SharePoint List Items Using Microsoft Power Automate

Hello,

I have a project scenario that needs to query my Organization Contacts list: when a requestor creates a new request, an approval flow is triggered from Microsoft Power Automate, and the approvers are taken from the Organization Contacts list.

The Organization Contact List schema :

The MS Flow Process as below :

length(body('Filter_array')) : this is the expression I use to get the total number of items returned by my Filter Array query.
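For example, you can reference the result in a Compose action or a Condition (the action name Filter_array is assumed to match your Flow; adjust it to your own action name):

```
length(body('Filter_array'))

greater(length(body('Filter_array')), 0)
```

The first expression returns the total number of items the Filter Array action produced; the second is handy in a Condition step to branch only when the filter returned at least one approver.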

Happy Sharepoint-ing

Change Default SSH Port in Ubuntu

By default, SSH listens on port 22. If you want to change the default SSH port in Ubuntu, perform the following steps with root privileges:

  1. Open the /etc/ssh/sshd_config file and locate the line #Port 22
sudo nano /etc/ssh/sshd_config

2. Then, uncomment it (remove the leading # character) and change the value to an appropriate port number (for example, 4022), then save the configuration

3. Restart the SSH service

sudo systemctl restart sshd
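The edit in step 2 can also be scripted with sed. A minimal sketch, run here against a throwaway copy (point it at /etc/ssh/sshd_config on a real system):

```shell
# simulate the stock config line in a throwaway file
printf '#Port 22\n' > /tmp/sshd_config.demo

# uncomment the Port line and switch it to 4022
sed -i 's/^#Port 22$/Port 4022/' /tmp/sshd_config.demo

# confirm the change
grep '^Port' /tmp/sshd_config.demo
# → Port 4022
```

Remember to restart sshd afterwards, and to allow the new port through your firewall before closing your current session.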

Convert HTML String To PDF Via iText Library And Download

I decided to post this code on this blog for later reference in case I go back to being a developer 🙂 This was a SharePoint 2016 project that still used an ASP.NET Web Forms application.

  1. Code the .ascx (user control)

When the user is directed to this page, the user control automatically generates some HTML from the submitted information. Once that is done, the JavaScript below triggers a click on #btnprint, and that button executes the JavaScript function PrintElem.

The PrintElem function collects the HTML (see the printContents variable) and transfers it to the txtPrint control. Once that is done, it calls the server-side handler btnExportToPdf_Click via __doPostBack('<%=btnExportToPdf.UniqueID%>', '');

2. Reference libraries

HtmlAgilityPack and iText 7; you can install both from the NuGet package manager in Visual Studio.

3. Code

protected void btnExportToPdf_Click(object sender, EventArgs e)
{
        StringBuilder contents = new StringBuilder(txtPrint.Text);
        string mFilePath = string.Empty;

        // the block below is only used to get the total page count
        using (MemoryStream ms = new MemoryStream())
        {
            HtmlDocument hDocument = new HtmlDocument
            {
                OptionWriteEmptyNodes = true,
                OptionAutoCloseOnEnd = true,
            };

            hDocument.LoadHtml(contents.ToString());
            var closedTags = hDocument.DocumentNode.WriteTo();

            FontProgram fontProgram = FontProgramFactory.CreateFont(@"C:\Windows\Fonts\calibri.ttf");
            PdfFont calibri = PdfFontFactory.CreateFont(fontProgram, PdfEncodings.WINANSI);

            iText.Kernel.Pdf.PdfWriter pdfWriter = new PdfWriter(ms);

            // a PdfWriter can only back a single PdfDocument, so create pdfDoc directly
            iText.Kernel.Pdf.PdfDocument pdfDoc = new iText.Kernel.Pdf.PdfDocument(pdfWriter);
            pdfDoc.SetDefaultPageSize(iText.Kernel.Geom.PageSize.A4);

            HeadertFooterHandler handler = new HeadertFooterHandler();
            pdfDoc.AddEventHandler(PdfDocumentEvent.START_PAGE, handler);
            handler.setInfo(DocNo);

            using (Document document = new Document(pdfDoc))
            {
                /*document.SetMargins(10, 10, 10, 10);*/
                document.SetMargins(0, 0, 0, 0);
                document.SetFont(calibri);
                document.SetFontSize(8);
                document.SetWordSpacing(0);

                using (var htmlMemoryStream = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(closedTags.ToString())))
                {
                    /* KB : https://itextpdf.com/en/resources/books/itext-7-converting-html-pdf-pdfhtml/chapter-6-using-fonts-pdfhtml */

                    ConverterProperties properties = new ConverterProperties();
                    properties.SetFontProvider(new iText.Html2pdf.Resolver.Font.DefaultFontProvider(true, true, true));

                    HtmlConverter.ConvertToPdf(htmlMemoryStream, pdfDoc, properties);
                }
            }

            //byte[] mByte = ms.ToArray();

            //TR_AFA_HEADERS afa = AfaHeader;
            //mFilePath = string.Format("{0}/{1}/{2}/AFA_{3}_Print.pdf", SPContext.Current.Web.Url, afa.DocumentLibraryType, afa.AfaHeaderID, afa.Afa_Header_Id);

            //bool sts = Utils.SP_UploadDocument(SPContext.Current.Web.Url, mFilePath, mByte, afa.Afa_Number, afa.Afa_Purpose);


            pdfDoc.Close();
            pdfWriter.Close();
            ms.Close();
        }

        using (MemoryStream ms = new MemoryStream())
        {
            HtmlDocument hDocument = new HtmlDocument
            {
                OptionWriteEmptyNodes = true,
                OptionAutoCloseOnEnd = true,
            };

            hDocument.LoadHtml(contents.ToString());
            var closedTags = hDocument.DocumentNode.WriteTo();

            FontProgram fontProgram = FontProgramFactory.CreateFont(@"C:\Windows\Fonts\calibri.ttf");
            PdfFont calibri = PdfFontFactory.CreateFont(fontProgram, PdfEncodings.WINANSI);

            iText.Kernel.Pdf.PdfWriter pdfWriter = new PdfWriter(ms);

            // a PdfWriter can only back a single PdfDocument, so create pdfDoc directly
            iText.Kernel.Pdf.PdfDocument pdfDoc = new iText.Kernel.Pdf.PdfDocument(pdfWriter);
            pdfDoc.SetDefaultPageSize(iText.Kernel.Geom.PageSize.A4);

            HeadertFooterHandler handler = new HeadertFooterHandler();
            pdfDoc.AddEventHandler(PdfDocumentEvent.START_PAGE, handler);
            handler.setInfo(DocNo);

            using (Document document = new Document(pdfDoc))
            {
                /*document.SetMargins(10, 10, 10, 10);*/
                document.SetMargins(0, 0, 0, 0);
                document.SetFont(calibri);
                document.SetFontSize(8);
                document.SetWordSpacing(0);

                using (var htmlMemoryStream = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(closedTags.ToString())))
                {
                    /* KB : https://itextpdf.com/en/resources/books/itext-7-converting-html-pdf-pdfhtml/chapter-6-using-fonts-pdfhtml */

                    ConverterProperties properties = new ConverterProperties();
                    properties.SetFontProvider(new iText.Html2pdf.Resolver.Font.DefaultFontProvider(true, true, true));

                    HtmlConverter.ConvertToPdf(htmlMemoryStream, pdfDoc, properties);
                }
            }

            byte[] mByte = ms.ToArray();

            TR_AFA_HEADERS afa = AfaHeader;
            mFilePath = string.Format("{0}/{1}/{2}/AFA_{3}_Print.pdf", SPContext.Current.Web.Url, afa.DocumentLibraryType, afa.AfaHeaderID, afa.Afa_Header_Id);

            bool sts = Utils.SP_UploadDocument(SPContext.Current.Web.Url, mFilePath, mByte, afa.Afa_Number, afa.Afa_Purpose);

            contents = null;

            pdfDoc.Close();
            pdfWriter.Close();
            ms.Close();
        }
        loading_screen.Attributes.Add("style", "display:none");
        Response.Redirect(mFilePath);
    }


}

public class HeadertFooterHandler : IEventHandler
{
    String info;
    public void setInfo(String info)
    {
        this.info = info;
    }
    public String getInfo()
    {
        return info;
    }

    public void HandleEvent(Event @event)
    {
        iText.Kernel.Colors.Color colorGray = new DeviceRgb(128, 128, 128);

        PdfDocumentEvent docEvent = (PdfDocumentEvent)@event;
        PdfPage page = docEvent.GetPage();
        int pageNum = docEvent.GetDocument().GetPageNumber(page);

        iText.Kernel.Geom.Rectangle pageSize = page.GetPageSize();
        PdfDocument pdfDoc = ((PdfDocumentEvent)@event).GetDocument();

        PdfCanvas pdfCanvas = new PdfCanvas(page.NewContentStreamBefore(), page.GetResources(), pdfDoc);

        new Canvas(pdfCanvas, pdfDoc, pageSize)
         //header
         .SetFontSize(4)
         .SetOpacity(2)
         .SetFontColor(colorGray)
         .ShowTextAligned(info, 10, pageSize.GetTop() - 20, TextAlignment.LEFT, VerticalAlignment.MIDDLE, 0)
         .ShowTextAligned(string.Format("{0:dd MMM yyyy HH:mm:ss}", DateTime.Now), 10, 30, TextAlignment.LEFT, VerticalAlignment.MIDDLE, 0)
         .ShowTextAligned(string.Format("Page {0} of {1}",pageNum.ToString(), docEvent.GetDocument().GetNumberOfPages()), pageSize.GetWidth() - 60, 30, TextAlignment.RIGHT, VerticalAlignment.MIDDLE, 0);
    }
}

Sample screenshot

Happy Sharepoint-ing

Install SQL Server 2019 Standard on Infinys Cloud with Ubuntu 18.04 LTS

MS SQL Server is a relational database system by Microsoft that was made available on Linux in 2016. In this article, I’ll take you through the steps to install MS SQL Server 2019 on an Ubuntu 18.04 Linux system.

Below are the minimum system requirements to run MS SQL on Ubuntu 20.04/18.04/16.04 server:

  • Minimum memory of 2 GB
  • CPU with a minimum speed of 1.4 GHz (2 GHz or faster recommended)
  • SQL Server requires a minimum of 10 GB of available hard-disk space

The following are the server specifications that we used for SQL Server 2019:

A. Steps to Install SQL Server 2019:

  1. Import the public repository GPG keys:
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

2. Register the Microsoft SQL Server Ubuntu repository for SQL Server 2019

sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/mssql-server-2019.list)"

3. Run the following commands to install SQL Server:

sudo apt-get update  
sudo apt-get install -y mssql-server

4. After the package installation finishes, run mssql-conf setup and follow the prompts to set the SA password and choose your edition.

sudo /opt/mssql/bin/mssql-conf setup

Because we have a SQL Server licence, I chose setup option number 8 and then entered the licence product key.

5. Once the configuration is done, verify that the service is running

systemctl status mssql-server --no-pager 

If you plan to connect remotely, you might also need to open the SQL Server TCP port (default 1433) on your firewall.

At this point, SQL Server 2019 is running on your Ubuntu machine and is ready to use!

B. Install the SQL Server 2019 command-line tools:

To create a database, you need to connect with a tool that can run Transact-SQL statements on the SQL Server. The following steps install the SQL Server command-line tools: sqlcmd and bcp.

Use the following steps to install the mssql-tools on Ubuntu.

1. Import the public repository GPG keys.

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

2. Register the Microsoft Ubuntu repository.

curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list

(Replace 18.04 in the URL with your Ubuntu version as needed.)

3. Update the sources list and run the installation command with the unixODBC developer package. For more information, see Install the Microsoft ODBC driver for SQL Server (Linux).

sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install msodbcsql17

# optional: for bcp and sqlcmd
sudo ACCEPT_EULA=Y apt-get install mssql-tools
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc
# optional: for unixODBC development headers
sudo apt-get install unixodbc-dev

C. Connect locally use SQLCMD

1. Run sqlcmd with parameters for your SQL Server name (-S), the user name (-U), and the password (-P).


sqlcmd -S localhost -U SA -P '<yourpassword>'

2. If the login succeeds, you will see a prompt like the one below

From there you can try any SQL commands: create a database, run queries, delete data or databases, and so on.
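For example, once logged in you can create and list databases with T-SQL like this (a sketch; the database name DemoDB is just an example):

```sql
CREATE DATABASE DemoDB;
GO
SELECT name FROM sys.databases;
GO
```

Each batch is executed when you type GO on its own line.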

D. Connect Remotely Using SQL Server Management Studio

If you plan to connect remotely, you might also need to open the SQL Server TCP port (default 1433) on your firewall.

E. Reset SQL Server SA Password (If needed)

1. Connect to SQL Server using the command-line tool with the existing password to make sure that your current password is working. Skip this step if you don’t know the password.

sqlcmd -S <SQLInstanceName> -U <UserName> -P <Password>

2. To change the “sa” password, first stop SQL Server service on Linux:

sudo systemctl stop mssql-server
sudo systemctl status mssql-server

3. Reset the “sa” password with a new one:

sudo /opt/mssql/bin/mssql-conf set-sa-password

4. Start and verify the status of SQL Server Service:

sudo systemctl start mssql-server
sudo systemctl status mssql-server

5. Voilà! You are done changing the password.

Reference :

Infinys Cloud System https://isi.co.id/

Thank You.

Room Booking Reservation: Integration with SharePoint 365, Power Apps, Power Automate, MS Teams, and ICS files

After going through Power Apps and Power Automate tutorials from YouTube and elsewhere, this sample project is finally about 90% complete; the ICS file format needs a little updating and then it’s all done.

Many thanks to April Dunnam (https://www.youtube.com/watch?v=DU_d71ykRJA) for her tutorial. Her amazing video really inspired me, and I will explore this integration more deeply.

Here are the details of this project:

  1. Business Process

  • All users that have access to the site are able to create a new Room Booking list item.
  • Once submitted, Power Automate is automatically triggered and sends an approval request to all GA members.
  • GA members are taken from the GA Members group in the SharePoint group permissions.
  • If approved, create an ICS file, create a record in the Events (Calendar) list, and then send an email to the requestor, attendees, and external attendees with the ICS file as an attachment.
  • If rejected, send an email to the requestor saying that the request has been rejected.

2. Room Booking form accessed from the SharePoint site (form already customized with Power Apps)




3. Room Booking accessed via the mobile app

Click Request Room Booking

4. Integration of Power Apps with MS Teams

5. Event Calendar

6. Power Automate / MS Flow

7. Sample Email Approval

Happy Sharepoint-ing

Capital expenditure (CapEx) versus operational expenditure (OpEx)

In the past, companies needed to acquire physical premises and infrastructure to start their business. There was a substantial up-front cost in hardware and infrastructure to start or grow a business. Cloud computing provides services to customers without significant upfront costs or equipment setup time.

These two approaches to investment are referred to as:

  1. Capital Expenditure (CapEx): CapEx is the spending of money on physical infrastructure up front, and then deducting that expense from your tax bill over time. CapEx is an upfront cost, which has a value that reduces over time.
  2. Operational Expenditure (OpEx): OpEx is spending money on services or products now and being billed for them now. You can deduct this expense from your tax bill in the same year. There’s no upfront cost. You pay for a service or product as you use it.

CapEx computing costs

A typical on-premises datacenter includes costs such as:

Server costs

This area includes all hardware components and the cost of supporting them. When purchasing servers, make sure to design fault tolerance and redundancy, such as server clustering, redundant power supplies, and uninterruptible power supplies. When a server needs to be replaced or added to a datacenter, you need to pay for the computer. This can affect your immediate cash flow because you must pay for the server up front.

Storage costs

This area includes all storage hardware components and the cost of supporting it. Based on the application and level of fault tolerance, centralized storage can be expensive. For larger organizations, you can create tiers of storage where more expensive fault‐tolerant storage is used for critical applications and lower expense storage is used for lower priority data.

Network costs

Networking costs include all on-premises hardware components, including cabling, switches, access points, and routers. This also includes wide area network (WAN) and Internet connections.

Backup and archive costs

This is the cost to back up, copy, or archive data. Options might include setting up a backup to or from the cloud. There’s an upfront cost for the hardware and additional costs for backup maintenance and consumables like tapes.

Organization continuity and disaster recovery costs

Along with server fault tolerance and redundancy, you need to plan for how to recover from a disaster and continue operating. Your plan should consist of creating a disaster recovery site. It could also include backup generators. Most of these are upfront costs, especially if you build a disaster recovery site, but there’s an additional ongoing cost for the infrastructure and its maintenance.

Datacenter infrastructure costs

These are costs for construction and building equipment, as well as future renovation and remodeling costs that may arise as demands grow. Additionally, this infrastructure incurs operational expenses for electricity, floor space, cooling, and building maintenance.

Technical personnel

While not a capital expenditure, the personnel required to work on your infrastructure are specific to on-premises datacenters. You will need the technical expertise and workforce to install, deploy, and manage the systems in the datacenter and at the disaster recovery site.

OpEx cloud computing costs

With cloud computing, many of the costs associated with an on-premises datacenter are shifted to the service provider. Instead of thinking about physical hardware and datacenter costs, cloud computing has a different set of costs. For accounting purposes, all these costs are operational expenses:

Leasing software and customized features

Using a pay-per-use model requires actively managing your subscriptions to ensure users do not misuse the services, and that provisioned accounts are being utilized and not wasted. As soon as the provider provisions resources, billing starts. It is your responsibility to de-provision the resources when they aren’t in use so that you can minimize costs.

Scaling charges based on usage/demand instead of fixed hardware or capacity.

Cloud computing can bill in various ways, such as the number of users or CPU usage time. However, billing categories can also include allocated RAM, I/O operations per second (IOPS), and storage space. Plan for backup traffic and disaster recovery traffic to determine the bandwidth needed.

Billing at the user or organization level.

The subscription (pay-per-use) model is a computing billing method that is designed for both organizations and users. The organization or user is billed for the services used, typically on a recurring basis. You can scale, customize, and provision computing resources, including software, storage, and development platforms. For example, when using a dedicated cloud service, you could pay based on server hardware and usage.

Benefits of CapEx

With capital expenditures, you plan your expenses at the start of a project or budget period. Your costs are fixed, meaning you know exactly how much is being spent. This is appealing when you need to predict the expenses before a project starts due to a limited budget.

Benefits of OpEx

Demand and growth can be unpredictable and can outpace expectation, which is a challenge for the CapEx model as shown in the following graph.

A graph showing how expected demand can be different from real demand and how CapEx infrastructure can be exceeded by demand.

With the OpEx model, companies wanting to try a new product or service don’t need to invest in equipment. Instead, they pay as much or as little for the infrastructure as required.

OpEx is particularly appealing if the demand fluctuates or is unknown. Cloud services are often said to be agile. Cloud agility is the ability to rapidly change an IT infrastructure to adapt to the evolving needs of the business. For example, if your service peaks one month, you can scale to demand and pay a larger bill for the month. If the following month the demand drops, you can reduce the used resources and be charged less. This agility lets you manage your costs dynamically, optimizing spending as requirements change.

What is serverless computing?

Serverless computing lets you run application code without creating, configuring, or maintaining a server. The core idea is that your application is broken into separate functions that run when triggered by some action. This is ideal for automated tasks – for example, you can build a serverless process that automatically sends an email confirmation after a customer makes an online purchase.

The serverless model differs from VMs and containers in that you only pay for the processing time used by each function as it executes. VMs and containers are charged while they’re running – even if the applications on them are idle. This architecture doesn’t work for every app – but when the app logic can be separated to independent units, you can test them separately, update them separately, and launch them in microseconds, making this approach the fastest option for deployment.

Here’s a diagram comparing the three compute approaches we’ve covered.

Install ASP.Net Core 3 using nginx on Ubuntu 18.04

After two days of working on this installation, I finally completed installing ASP.NET Core 3.0 with Nginx as the web server, and then deployed one of my projects from GitHub: https://github.com/agustox21/MovieListCore3 .

That project was actually taken from another GitHub repository, but it still used ASP.NET Core 2. Since I installed ASP.NET Core 3.0, I decided to modify some of the code and publish it to my own GitHub so I could download it to my Ubuntu server and deploy it.

I have already installed Nginx, as described in this article: https://agustox21.wordpress.com/2020/05/06/install-nginx-and-apache-on-the-same-server-on-ubuntu-18-04/, so I will skip the Nginx installation step.

Here are the steps as follow :

1. Update apt

Before installing new software, it is strongly recommended to update your local software database. Updating helps to make sure you’re installing the latest and best-patched software available.

sudo apt-get update

2. Install the Prerequisites

wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Install the .NET SDK

sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install dotnet-sdk-3.1

To confirm your installation and check the version of the dotnet CLI installed on the machine, type the following command:

dotnet --version

Output :

root@sandbox01:~# dotnet --version
3.1.202

3. Define a Server Block for Port 3100

Since ports 80 and 2100 are already used by other applications, I decided to serve this application on port 3100.

Create the server block file:

sudo nano /etc/nginx/sites-available/netmoviesapps

Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:

server {
    listen 3100;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }
}

sudo ln -s /etc/nginx/sites-available/netmoviesapps /etc/nginx/sites-enabled/

4. Create the web directory for the Movie List app

Create the application directory as follows, using the -p flag to create any necessary parent directories:

sudo mkdir -p /var/www/movie-app

Next, assign ownership of the directory with the $USER environment variable:

sudo chown -R $USER:$USER /var/www/movie-app

The permissions of your web roots should be correct if you haven’t modified your umask value, but you can make sure by typing:

sudo chmod -R 755 /var/www/movie-app
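A quick way to sanity-check the result (a sketch using a throwaway directory; the same stat call works on /var/www/movie-app):

```shell
# create a demo directory and apply the same mode as above
mkdir -p /tmp/movie-app-demo
chmod 755 /tmp/movie-app-demo

# print the octal permissions; 755 means rwxr-xr-x
stat -c '%a' /tmp/movie-app-demo
# → 755
```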

Now, you can move into the parent directory and clone the application on GitHub:

cd /var/www
git clone https://github.com/agustox21/MovieListCore3.git

Now, to build the project and all its dependencies, run the following command:

cd /var/www/movie-app
dotnet build
dotnet publish

5. Set up the Kestrel process

Kestrel is the open-source, cross-platform, lightweight default web server for ASP.NET Core applications. ASP.NET Core applications run Kestrel as an in-process server to handle web requests; it runs on Windows, Linux, and macOS and supports SSL. To ensure that the Kestrel process keeps running in the background, you will use systemd.

systemd unit files let you manage a process, providing start, stop, restart, and logging functionality once you define a unit.

Move into the systemd directory:

cd /etc/systemd/system

Create a new file for editing:

sudo nano movie.service

[Unit]
Description=Movie app

[Service]
WorkingDirectory=/var/www/movie-app
ExecStart=/usr/bin/dotnet /var/www/movie-app/bin/Debug/netcoreapp3.1/publish/MovieListCore3.dll
Restart=always
RestartSec=10
SyslogIdentifier=movie
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false

[Install]
WantedBy=multi-user.target

Now save the file and enable the new movie service:

sudo systemctl enable movie.service

After that, start the service:

sudo systemctl start movie.service

Then check its status:

sudo systemctl status movie.service

Finally, reload Nginx so the new server block takes effect:

sudo nginx -s reload

 

Here is a screenshot after all the deployment steps above are completed.

aspnetcore3

Install Cockpit On Ubuntu 18.04 LTS

Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser. It makes Linux discoverable, allowing sysadmins to easily perform tasks such as starting containers, storage administration, network configuration, inspecting logs and so on.

Cockpit is released under the LGPL v2.1+, and it is available for Red Hat, CentOS, Debian, Ubuntu, Atomic, and Arch Linux.

Features of Cockpit:

  • Modify the network settings
  • Easily manage the user accounts
  • With the use of sosreport, it can collect system configuration and diagnostic information
  • Connect and Manage multiple systems from a single Cockpit session
  • Gathers system performance using Performance Co-Pilot framework and displays it in a graph.
  • Manage the containers via Docker
  • Provides web-based shell in a terminal

This guide helps you to install Cockpit on Ubuntu :

1. Update apt

sudo apt update
sudo apt-get upgrade

2. Install the Cockpit package.

sudo apt -y install cockpit
sudo systemctl start cockpit.socket
sudo systemctl enable cockpit.socket

Once you start the Cockpit service, it will start listening on port 9090. Now, open your browser and navigate to the URL below:

https://ip-address:9090

Cockpit uses a self-signed SSL certificate for secure communication. So, you need to add an exception in your browser to access the Cockpit.

Login

Cokpit1

 

System

Cokpit2

 

Cron Job and Register New job timer

Cokpit3

Install Cron as Automate Tasks on Ubuntu 18.04

Cron: something I kept hearing about for a couple of years. At the time I was involved in a project that needed a job timer, and one of the developers used a cron job instead of what we usually reach for: Windows Task Scheduler, a SharePoint timer job, or a SQL job.

What is Cron ?

Cron is a time-based job scheduling daemon found in Unix-like operating systems, including Linux distributions. Cron runs in the background and tasks scheduled with cron, referred to as “cron jobs,” are executed automatically, making cron useful for automating maintenance-related tasks.

So in this demo, I will dig a little into how to install cron on Ubuntu and test it. Here are the steps:

1. Update apt

sudo apt update
sudo apt-get upgrade

2. Install Cron

sudo apt install cron

You’ll need to make sure it’s set to run in the background too:

sudo systemctl enable cron

cron1

Done, and pretty easy, right? 🙂 Cron is now installed and running, so the next step is to use it to schedule your task jobs.

Cron Syntax Structure

Tasks scheduled in a cron are structured like this:

minute hour day_of_month month day_of_week command_to_run

Here’s a functional example of a cron expression. This expression runs the command curl http://www.google.com every Tuesday at 5:30 PM:

30 17 * * 2 curl http://www.google.com

There are also a few special characters you can include in the schedule component of a cron expression to make scheduling easier:

  • *: In cron expressions, an asterisk is a wildcard variable that represents “all.” Thus, a task scheduled with * * * * * ... will run every minute of every hour of every day of every month.
  • ,: Commas break up scheduling values to form a list. If you want to have a task run at the beginning and middle of every hour, rather than writing out two separate tasks (e.g., 0 * * * * ... and 30 * * * * ...), you could achieve the same functionality with one (0,30 * * * * ...).
  • -: A hyphen represents a range of values in the schedule field. Instead of having 30 separate scheduled tasks for a command you want to run for the first 30 minutes of every hour (as in 0 * * * * ..., 1 * * * * ..., 2 * * * * ..., and so on), you could just schedule it as 0-29 * * * * ....
  • /: You can use a forward slash with an asterisk to express a step value. For example, instead of writing out eight separate cron tasks to run a command every three hours (as in 0 0 * * * ..., 0 3 * * * ..., 0 6 * * * ..., and so on), you could schedule it to run like this: 0 */3 * * * ....

Here are some more examples of how to use cron’s scheduling component:

  • * * * * * – Run the command every minute.
  • 12 * * * * – Run the command 12 minutes after every hour.
  • 0,15,30,45 * * * * – Run the command every 15 minutes.
  • */15 * * * * – Run the command every 15 minutes.
  • 0 4 * * * – Run the command every day at 4:00 AM.
  • 0 4 * * 2-4 – Run the command every Tuesday, Wednesday, and Thursday at 4:00 AM.
  • 20,40 */8 * 7-12 * – Run the command on the 20th and 40th minute of every 8th hour every day of the last 6 months of the year.
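You can sanity-check the step syntax with seq: a `*/15` in the minute field selects the same minutes as the explicit list `0,15,30,45` (this is just an illustration of the equivalence, not a cron command):

```shell
# expand "start at 0, step by 15, stop at 59" into an explicit comma list
seq -s, 0 15 59
# → 0,15,30,45
```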

For more detail, see https://crontab.guru/ and https://linuxhint.com/run_cron_job_every_minute/