Setting up remote access to a Postgres DB on Ubuntu

PostgreSQL, as described on Wikipedia (https://en.wikipedia.org/wiki/PostgreSQL), is a powerful relational database management system that can handle workloads ranging from a single machine to a data center. It is highly scalable and widely popular. In our Odoo ERP development we now more often use Postgres as the default database for Odoo. In a default installation the database can only be reached from localhost, and in this article I will show you how to set up remote access so that other IPs or clients can connect.

Here are the steps :

  • Allow remote access

Note : In this step you need to update the files postgresql.conf and pg_hba.conf

  1. postgresql.conf

In this step, we will look at how to configure Postgres to accept external connections. To begin, open the configuration file with your preferred editor:

sudo nano /etc/postgresql/10/main/postgresql.conf

Look for this line in the file:

#listen_addresses = 'localhost'

Uncomment it and change the value to '*'; this will allow Postgres to accept connections from any address.

listen_addresses = '*'

Save and exit the file.

2. pg_hba.conf

Next, modify pg_hba.conf to also allow connections from everyone. Open the file with your preferred editor:

sudo nano /etc/postgresql/10/main/pg_hba.conf

Modify this section:

# IPv4 local connections:
host all all 127.0.0.1/32 md5

To this:

# IPv4 local connections:
host all all 0.0.0.0/0 md5

This file controls client authentication; each record specifies a connection type, database name, username, IP address range, and authentication method. In our case we are granting all database users access to all databases from any IP address range, thus letting any IP address connect. Save and exit the file.
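
If you prefer not to open the database to every address, you can use a narrower CIDR range instead. A sketch for a hypothetical office LAN (192.168.1.0/24 is just an example subnet):

# IPv4 connections from the office LAN only:
host all all 192.168.1.0/24 md5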

  • Allow port 5432

Allow port 5432 through the firewall (assuming UFW, Ubuntu's default firewall) by executing:

sudo ufw allow 5432/tcp

  • Restart Postgres

Restart Postgres to apply all the changes you have made to its configuration by running:

sudo systemctl restart postgresql

  • Create a user for remote access

With all the steps above, the configuration for remote access is actually complete. In our case, however, we also need to create a dedicated user for remote connections. Here are the steps to create that user and grant it the required privileges:

sudo -u postgres psql

Create a user, for example one with the username python:

CREATE USER python WITH PASSWORD 'python';

Grant privileges to this user; in this step we will use the ALTER ROLE statement:

ALTER ROLE python superuser;

To view the role use the following command:

\du python

The following options are available with the ALTER ROLE statement (a few examples follow the list):

  • SUPERUSER | NOSUPERUSER – It is used to determine if the role is a superuser.
  • VALID UNTIL ‘timestamp’ – It is used to specify the expiry date and time of a role’s password.
  • CREATEROLE | NOCREATEROLE – It is used to provide permissions to a role for creating or modifying roles.
  • PASSWORD ‘password’ | PASSWORD NULL – It is used to change a role’s password.
  • INHERIT | NOINHERIT – It is used to determine if the inherited role has all the inherited privileges of the parent role.
  • BYPASSRLS | NOBYPASSRLS – It is used to check if a role can bypass a row-level security (RLS) policy.
  • LOGIN | NOLOGIN – As the name suggests itself, it is used to allow the role to log in.
  • CONNECTION LIMIT limit – It is used to set the number of concurrent connections that a role can make; a value of -1 means the role can make an unlimited number of connections.
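
For illustration, here are a few examples combining these options on the python role we created earlier (the date and limit values are arbitrary):

ALTER ROLE python VALID UNTIL '2025-12-31';   -- password expires at the end of 2025
ALTER ROLE python CONNECTION LIMIT 10;        -- allow at most 10 concurrent connections
ALTER ROLE python LOGIN;                      -- ensure the role is allowed to log in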

The following rules must be followed while using the ALTER ROLE statement:

  • Superusers can modify the attributes of any role.
  • A role with the CREATEROLE attribute can only modify non-superuser, non-replication roles.
  • Ordinary roles can only change their own passwords.
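
Finally, you can verify remote access from another machine by connecting with psql (192.168.1.10 is a placeholder for your server's IP address):

psql -h 192.168.1.10 -p 5432 -U python -d postgres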

Cheers

Deploy your ASP.NET Core Web Application in IIS

When we try to deploy an ASP.NET Core application using the self-hosted approach via the dotnet CLI, the result is something like a console-hosted service: the application runs inside a terminal (or command-prompt) window, listening on a configured port for requests.

When you close this terminal (or command prompt) the application shuts down, which is not what you want when running a production-grade API or web application.

The recommended alternative is the reverse-proxy approach: any request from the outside world is received by the hosting web server (such as IIS, Nginx or Apache), which internally routes the request to the application running on the Kestrel web server.

This approach provides an additional layer of security and abstraction, and lets us use the additional features of a full-fledged web server such as IIS or Nginx, which offer much more, such as server variables or request preprocessing. To deploy an ASP.NET Core application to such a web server, we need to install and set up a few things first.

Here’s a step-by-step process to deploy your ASP.NET Core application in IIS :

  1. Configure IIS

By default IIS is turned off, so we need to enable this feature under Turn Windows Features On or Off.

2. Download and install the ASP.NET Core Hosting Bundle: this provides all the modules required for communication between IIS and the Kestrel server in which the application resides and runs. You can download the hosting bundle from the official ASP.NET website.

3. Once the Hosting bundle is installed and the IIS is running, type inetmgr in the Run (Windows + R) window. The IIS manager window opens. Expand on the Server that shows up in the left panel and right click on Application Pools. Select Add Application Pool.

An Application Pool is an IIS Process on which the applications reside and execute. On the New Application Pool dialog, enter the app pool name and select “No Managed Code” from the list. And then click on Save. A new Process which can handle an ASP.NET Core application is now created. Now we shall create a new website and add it to the created app pool.

4. Copy the application executables into a folder on top of which we shall create a website in IIS. We obtain the binaries by publishing our application with the below command within the project directory.

> dotnet publish -c Release

This generates the executables into the output path bin/Release/netcoreappX.Y/publish/.

Copy all that content into the folder C:\inetpub\wwwroot\MyAspNetCoreApp\. Note that once we enable IIS on the hosting machine, the inetpub directory with all its subdirectories is created; this is where IIS Manager operates by default for web applications.
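
For reference, publishing also generates a web.config that wires IIS to the ASP.NET Core Module. It typically looks roughly like the sketch below; the module name and hostingModel vary with the SDK version, and MyAspNetCoreApp.dll is the assumed assembly name:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- hand all requests to the ASP.NET Core Module -->
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
    <!-- set stdoutLogEnabled="true" to capture startup output when troubleshooting -->
    <aspNetCore processPath="dotnet" arguments=".\MyAspNetCoreApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
  </system.webServer>
</configuration>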

5. Once the contents are copied, create a website and point its physical path to the folder above, under which the web application will operate.

In the left panel that shows the server details, right-click the Sites folder and select Add Website. This is a container in which the application executables reside and run. A dialog appears with the configuration for the website to be created.

Specify the website name and the physical path where we just placed the executables. Then click Select next to Application Pool and choose the app pool we created previously. Click OK to create the website; the executables now run under this website using the process defined by the app pool. To check the website, click the Browse option in the right-side panel.

Now we should see our readersApi application up and running like any other website. If something goes wrong, we get an error screen with the 502.5 error code.

How to troubleshoot a broken IIS Deployment? (502.5 errors)

To troubleshoot such scenarios, we can add a few extra steps within our code.

In Program.cs, when we create the WebHostBuilder for the application, we can add two extra chained methods that help surface the stack trace for us.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
	.UseIISIntegration()
	.UseKestrel()
	// add additional chain methods which capture startup errors 
	.UseSetting("detailedErrors", "true")
	.CaptureStartupErrors(true)
        .UseStartup<Startup>();

or in the newer .NET Core 3.x we have:

Host.CreateDefaultBuilder(args)
	.ConfigureWebHostDefaults(wb =>
		{
			wb.UseStartup<Startup>();
			// capture startup errors and show detailed error output
			wb.UseSetting("detailedErrors", "true");
			wb.CaptureStartupErrors(true);
			wb.UseIISIntegration();
		}
	);

In either case, when we add the CaptureStartupErrors() method to the web host, any errors that occur during app startup are captured and written to logs that can be viewed via the Windows Event Viewer. This helps in getting a clear picture of what happens during application startup.

Pro tip: if all of this fails, we can fall back to the dotnet CLI and run the app as a self-hosted application to see for ourselves what has gone wrong. Within the physical directory of the executables, we can run:

dotnet ./readersapi.dll --urls=http://0.0.0.0:5000 

which runs the application self-hosted at http://localhost:5000 or http://ip-of-the-system:5000.

This can easily help us troubleshoot the issues. The downside is that this approach works only when we have full control over the hosting environment.

But when we deploy to a remote host or a cloud environment such as Azure App Service, the two method chains above help us accurately find the issues.

Related link : https://referbruv.com/blog/posts/hosting-aspnet-core-app-in-iis-getting-started-and-troubleshooting-issues

Docker

Docker has become a trend in recent years and is becoming a standard in the IT industry for deploying software applications.

As a developer, I could not learn this technology at my previous job, because SharePoint On-Premises does not yet run in a Docker environment.

After I joined Infinys System Indonesia (https://infinyscloud.com/id), I have learned a lot about Docker from the developer team here, because some of our website deployments are built on top of Docker.

The top reason enterprises are using Docker is to help them deploy across multiple systems, migrate applications, and remove manual reconfiguration work. Because application dependencies are built into containers, Docker containers significantly reduce interoperability concerns.

In this article, I'll share what I have learned so far about Docker.

  • What is docker?
  • Components of Docker
  • Comparison Virtual Machine vs Docker
  • When to use Container vs VMs
  • Basic Commands on Docker

A. What Is Docker

Docker delivers software in containers, which simplifies the process by packaging everything it takes to run an application.

There are numerous advantages to using containers to deploy applications.

  • Isolated — Applications have their own libraries; no conflicts will arise from different libraries in other applications.
  • Limited (limits on CPU/memory) — Applications may not hog resources from other applications.
  • Portable — The container contains everything it needs and is not tied to an OS or Cloud provider.
  • Lightweight — The kernel is shared, making it much smaller and faster than a full OS image.

B. Components of Docker

  • Dockerfile is a text document that contains the commands needed to assemble a Docker image (see the sketch after this list).
  • Docker Image is a read-only template built from a Dockerfile, consisting of the application code with all its dependencies and libraries. It is portable, so it can easily be shared between developers and operators.
  • Container is a way of packaging an application with all its dependencies and configuration files. A container is a running instance of an image.
  • Docker Engine supports the tasks and workflows involved in building, shipping and running container-based applications. The engine creates a server-side daemon process that hosts images, containers, networks and storage volumes. Docker Engine is a client-server application.
  • Docker CLI is a command-line tool that lets you talk to the Docker daemon.
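
As an illustration, here is a minimal Dockerfile sketch for a hypothetical Python web app (the base image, file names, and port are assumptions):

# start from an official runtime image
FROM python:3.11-slim
WORKDIR /app
# install dependencies first so they are cached in a separate layer
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the application code and declare how to run it
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]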

C. Comparison: Virtual Machine vs Docker

Docker containers and virtual machines are both ways of deploying applications inside environments that are isolated from the underlying hardware. The chief difference is the level of isolation.

Link : https://www.weave.works/blog/a-practical-guide-to-choosing-between-docker-containers-and-vms

With a container runtime like Docker, your application is sandboxed inside of the isolation features that a container provides, but still shares the same kernel as other containers on the same host. As a result, processes running inside containers are visible from the host system (given enough privileges for listing all processes). For example, if you start a MongoDB container with Docker, then run ps -e | grep mongo in a regular shell on the host (not in Docker), the process will be visible. Having multiple containers share the same kernel allows the end user to bin-pack lots and lots of containers on the same machine with near-instant start time. Also, as a consequence of containers not needing to embed a full OS, they are very lightweight, commonly around 5-100 MB.
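
In command form, that example looks roughly like this (a sketch assuming the official mongo image):

docker run -d --name mongo-test mongo   # start MongoDB in a container
ps -e | grep mongo                      # run on the host, outside Docker: the mongod process is still visible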

In contrast, with a virtual machine, everything running inside the VM is independent of the host operating system, or hypervisor. The virtual machine platform starts a process (called virtual machine monitor, or VMM) in order to manage the virtualization process for a specific VM, and the host system allocates some of its hardware resources to the VM. However, what’s fundamentally different with a VM is that at start time, it boots a new, dedicated kernel for this VM environment, and starts a (often rather large) set of operating system processes. This makes the size of the VM much larger than a typical container that only contains the application.

D. When to use Container vs VMs

Containers are a good choice for the majority of application workloads. Consider containers in particular if the following is a priority:

Start time
Docker containers typically start in a few seconds or less, whereas virtual machines can take minutes. Thus, workloads that need to start very quickly, or that involve spinning apps up and down constantly, may be a good fit for Docker.

Efficiency
Because Docker containers share many of their resources with the host system, they require fewer things to be installed in order to run. Compared to a virtual machine, a container typically takes up less space and consumes less RAM and CPU time. For this reason, you can often fit more applications on a single server using containers than you could by using virtual machines. Likewise, due to their lower levels of resource consumption, containers may help to save money on cloud computing costs.

Licensing
Most of the core technologies required to deploy Docker containers, including container runtimes and orchestrators like Kubernetes, are free and open source. This can lead to cost savings while also increasing flexibility. (But it’s worth noting that in many cases organizations will use a commercial distribution of Docker or Kubernetes in order to simplify deployment and obtain professional support services.)

Code reuse
Each running container is based on a container image, which contains the binaries and libraries that the container requires to run a given application. Container images are easy to build using Dockerfiles. They can be shared and reused using container registries, which are basically repositories that host container images. You can set up an internal registry to share and reuse containers within your company. Thousands of prebuilt images can be downloaded from public registries (e.g. Docker Hub or Quay.io) for free and used as the basis for building your own containerized applications.

Of course, VMs may be packaged into images, too, and those images can also be shared, but not as efficiently and easily as containers. Furthermore, virtual machine images aren’t as easy to automatically build, and are typically larger in size. Also, because they usually include operating systems, redistributing them can become legally complicated. (In most cases you can’t legally download and run a virtual machine image with Windows preinstalled without having a Windows license, for example.)

When to stick with virtual machines
Let’s look at some reasons why you might forgo Docker and stick with your virtual machines.

Security
A full discussion of the security merits of virtual machines as compared to Docker is beyond the scope of this article. But suffice it to say that, essentially, virtual machines are more isolated from each other and from the host system than are Docker containers. That is because virtual machines, as we’ve noted, don’t directly share any kernels or other resources with the host system.

For this reason, virtual machines are arguably more secure overall than containers. Although Docker provides various tools to help isolate containers and prevent a breach within one container from escalating into others, at the end of the day, containers aren’t isolated from a security perspective in the same way that virtual machines are.

E. Basic Commands on Docker

To get started using Docker, you first have to download the Docker application from https://www.docker.com/

Docker runs smoothly on Windows (Desktop/Server), Linux, and macOS.

After the Docker installation completes, a UI like the one below will appear:

and you can also use the CLI to access Docker.

For Docker syntax and commands, you can download this PDF file: https://www.docker.com/sites/default/files/d8/2019-09/docker-cheat-sheet.pdf (a short example session follows the list below)

  • docker pull pulls an image from a registry to the local machine.
  • docker images shows local images.
  • docker run both creates and runs a container in a single operation.
  • docker ps shows running containers.
  • docker stop stops a running container.
  • docker ps -a shows running as well as stopped containers.
  • docker start starts a stopped container.
  • etc.
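
A short example session tying these commands together (a sketch; the nginx image and the port mapping are just assumptions):

docker pull nginx                            # download the image from Docker Hub
docker run -d -p 8080:80 --name web nginx    # create and run a container, mapping host port 8080 to 80
docker ps                                    # confirm the container is running
docker stop web                              # stop it
docker ps -a                                 # the stopped container is still listed
docker start web                             # start it again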

Happy Docker-ing

Come Now Is The Time To Worship

Come, now is the time to worship
Come, now is the time to give your heart
Come, just as you are to worship
Come, just as you are before your God
Come
One day every tongue will confess You are God
One day every knee will bow
Still the greatest treasure remains for those
Who gladly choose You now
Come, now is the time to worship
Come, now is the time to give your heart
Come, just as you are to worship
Come, just as you are before your God
Come
One day every tongue will confess You are God
One day every knee will bow
Still the greatest treasure remains for those
Who gladly choose You now
Willingly we choose to surrender our lives
Willingly our knees will bow
With all our heart, soul, mind and strength
We gladly choose You now
Come, now is the time to worship
Come, now is the time to give your heart
Come, just as you are to worship
Oh come, just as you are before your God
Come, come, come, oh come
Oh come, come
Bow, you nations
Come and worship
Come and worship your maker

Create first ASP.NET Core App in a Linux Docker Container

Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.  (Wikipedia)

Docker is currently the trend for most application deployments. With Docker, deploying to another environment is no longer painful: we publish our image with all the configuration we have already set up and reuse it when we want to deploy to another machine, with only minor additional configuration (easy to distribute).

Below is my sample application built with ASP.NET Core and published to Docker:

  1. Create a new project. Choose the ASP.NET Core Web App template as in the screenshot below.

2. Test-run the web app

Yay! You have now created an ASP.NET Core application 🙂

3. Add Docker support.

To download docker for windows you can go to this link https://www.docker.com/products/docker-desktop. Download and install it.

A Dockerfile is a text-based file (with no extension). It contains instructions that tell Docker how to assemble an image. In Visual Studio you can create a Dockerfile effortlessly by right-clicking the app name in Solution Explorer and selecting Add ➤ Docker Support.

4. Choose the target OS. For this project, I chose Linux.

5. ASP.NET Core Dockerfile structure
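
The generated Dockerfile typically uses a multi-stage build. A sketch of what it looks like (the image tags depend on your SDK and Visual Studio version, and MyAspNetCoreApp is the assumed project name):

# runtime base image used by the final stage
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80

# SDK image used to restore and build the project
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["MyAspNetCoreApp.csproj", "./"]
RUN dotnet restore "MyAspNetCoreApp.csproj"
COPY . .
RUN dotnet publish "MyAspNetCoreApp.csproj" -c Release -o /app/publish

# final image contains only the published output
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyAspNetCoreApp.dll"]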

6. Build the project, and the image will be listed in the Docker UI.

7. Run the application and choose Docker as the environment.

Once it is deployed, you can inspect the deployed pages and files inside the container with bash.
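
For example, a quick sketch (the container name below is a placeholder; list the real one with docker ps first, and the publish folder is assumed to be /app):

docker ps                              # find the container name or ID
docker exec -it myaspnetcoreapp bash   # open a shell inside the running container
ls /app                                # the published files should be here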

8. Push to Hub

Let's say we have finished all the programming and setup for this Docker image. We can push it to Docker Hub, and then deploy it to another machine that also has Docker installed.
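
A sketch of that push workflow (myaccount and the image name are assumptions; replace them with your Docker Hub account and image):

docker login                                                # authenticate with Docker Hub
docker tag myaspnetcoreapp myaccount/myaspnetcoreapp:1.0    # tag the local image for the registry
docker push myaccount/myaspnetcoreapp:1.0                   # upload it to the hub
docker pull myaccount/myaspnetcoreapp:1.0                   # on another machine, pull it and run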

Happy Coding

Enable snaps on Linux Mint and install Mysql Workbench Community

Snaps are applications packaged with all their dependencies to run on all popular Linux distributions from a single build. They update automatically and roll back gracefully.

Enable snapd

Snap is available for Linux Mint 18.2 (Sonya), Linux Mint 18.3 (Sylvia), Linux Mint 19 (Tara), Linux Mint 19.1 (Tessa) and the latest release, Linux Mint 20 (Ulyana). You can find out which version of Linux Mint you’re running by opening System info from the Preferences menu.

On Linux Mint 20, /etc/apt/preferences.d/nosnap.pref needs to be removed before Snap can be installed. This can be accomplished from the command line:

sudo rm /etc/apt/preferences.d/nosnap.pref
sudo apt update

Then install snapd:

sudo apt install snapd

Either restart your machine, or log out and in again, to complete the installation

Install Mysql Workbench Community

To install, just run the following command:

sudo snap install mysql-workbench-community
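
Once the command finishes, you can quickly confirm the snap is installed:

snap list | grep mysql-workbench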

Configure User Profile Service Application and My Site in SharePoint 2019

A. Create a User Profile Sync Account in Active Directory for AD Synchronization

For example, the user account is spSyncConn.

  1. Create a user in your Active Directory for AD synchronization, let's say spSyncConn
  2. The synchronization account must have Replicate Directory Changes permission at the root of the forest (a command-line sketch follows below).
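
For reference, one common way to grant that permission from the command line is with dsacls on a domain controller. This is only a sketch: the contoso domain and account values are placeholders, and you should verify the exact syntax for your environment:

dsacls "dc=contoso,dc=com" /G "CONTOSO\spSyncConn:CA;Replicating Directory Changes"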

B. Configure Synchronization Connections

  • Open Central Admin
  • Click Application Management
  • Navigate to Manage Service Application >> User Profile Service Application
  • Under Synchronization >> click Configure Synchronization Connections
  • On the Synchronization Connections page >> click Create New Connection

Fill in the fields according to your environment and/or business needs

  • Click the Populate Containers button to list the structure of your Active Directory
  • Check the containers that you want to synchronize, or select all to synchronize everything
  • Then click OK to save the configuration

C. Create MySite Host

  • Open Central Administration
  • Click Create site collections
  • Fill in all the information and choose My Site Host as the template

  • Fill in the Primary Site Collection Administrator and the Secondary as well
  • Choose a quota template
  • Click OK to save the configuration

D. Create My Site Managed Path

We will need to create a managed path for the My Site. Let's use the path "/sites/MySite/Personal" for this purpose. If it is not created already, we can create it by going to Central Administration -> Manage Web Applications -> Managed Paths.

In the new managed path creation page, add the intended managed path ("/sites/MySite/Personal"), specify the type as Wildcard Inclusion, and click OK.
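
Alternatively, the managed path can be created from the SharePoint 2019 Management Shell. A minimal sketch, assuming a web application at http://intranet.contoso.com (a placeholder URL):

# load the SharePoint snap-in if running from plain PowerShell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# wildcard inclusion is the default when -Explicit is not specified
New-SPManagedPath -RelativeURL "sites/MySite/Personal" -WebApplication "http://intranet.contoso.com"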

E. Configure My Site

  • Open the User Profile Service Application from Application Management -> Manage Service Applications.
  • It will open up the User Profile Service Application page. From there, select ‘Setup My Sites’.
  • In the My Site Host section, specify the Site Collection URL for the My Site Host we had created earlier.
  • Under Personal Site Location, specify the Managed Path for the My Site that we had created earlier.

F. Configure Self Service Site Creation

One added benefit of My Site is that it enables users to create site collections from the My Site page. We can enable this from Manage Web Applications -> Self-Service Site Creation.

Done.

Happy Sharepoint-ing

Benefits Of Cloud Computing

You’re probably using cloud computing services right now, even if you don’t realize it. If you use an online service to send email, edit documents, watch movies or TV, listen to music, play games, or store pictures and other files, it’s likely that cloud computing is making it all possible behind the scenes.

Cloud computing is a significant shift from the traditional way businesses think about information technology (IT) resources.

Following are some reasons why you should turn to cloud computing services:

  • Cost
  • Global
  • Performance
  • Security
  • Scalable, elastic, and flexible
  • Productivity
  • Reliability

Cost

Cloud computing eliminates the capital expense of buying hardware and software. You no longer need to set up and run on-site datacenters with racks of servers. You no longer need round-the-clock electricity for power and cooling, or the IT experts for managing the infrastructure. The cost adds up fast.

This consumption-based model brings with it many benefits, including:

  • No upfront infrastructure costs.
  • No need to purchase, manage, and maintain costly infrastructure that you may not use to its fullest.
  • Pay for additional resources only when they’re needed.
  • Stop paying for resources that are no longer needed.

Global

The benefits of cloud computing services include the ability to scale elastically. In cloud terms, that means delivering the right amount of IT resources—more or less computing power, storage, or bandwidth—right when they’re needed, and from the right geographic location.

Performance

The biggest cloud computing services run on a worldwide network of secure datacenters. The providers regularly upgrade to the latest generation of fast and efficient computing hardware. This configuration offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.

Security

Many cloud providers offer a broad set of policies, technologies, and controls that strengthen your security posture overall. They protect your data, apps, and infrastructure from potential threats.

You have physical security—who can access the building, who can operate the server racks, and so on. You also have digital security—who can connect to your systems and data over the network.

Scalable, elastic, and flexible

Cloud providers offer cloud computing services self-service and on demand. You can provision vast amounts of computing resources in minutes, typically with just a few mouse clicks. Cloud computing gives your business flexibility and takes the pressure off capacity planning.

Cloud computing supports both vertical and horizontal scaling, depending on your needs:

  • Vertical scaling, also known as scaling up, is the process of adding resources to increase the power of an existing server. Some examples of vertical scaling include adding more CPUs and adding more memory to support increased data collection.
  • Horizontal scaling, also known as scaling out, is the process of adding more servers that function together as one unit. For example, you have more than one server processing incoming requests.

Scaling can be done manually, or automatically based on specific triggers such as CPU usage or the number of requests, and resources can be allocated or deallocated in minutes.

Productivity

On-site datacenters often require a racking and stacking hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks. Your IT teams can spend time on achieving more important business goals.

Reliability

Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network.

Source :

https://docs.microsoft.com/en-us/learn/modules/principles-cloud-computing-dynamics-365-deployment/4-benefits-cloud-computing

How to make an Absence Attendance mobile app with a GPS location feature in 2 days – Power Apps

This article is available in 2 language versions.

Indonesia Version :

At this year's Microsoft Ignite 2021, I was interested in one of the presentations and demos about Power Apps.

The demo came from a company whose idea was to build small applications to help digitalize and automate the business. They also explained how enthusiastic their users were about using the application, and how they started developing other applications for other needs.

From there I thought: I would like to try building one too. An application that is quick to deliver, quick to develop, and quick to implement will accelerate the productivity of the company itself.

As it happened, I was also going to present a webinar on 25 March 2021 titled "Leverage your Work from Home Level Now", so I tried building such an application with Power Apps, taking the company news data from our internal SharePoint Online and storing the attendance data in an Excel file placed on OneDrive (Microsoft Office 365).

Here are the requirements:

As the HR Department and Company

  • An application that is easy to build, with no complicated development or implementation
  • Attendance can easily be recapped because it uses an Excel file
  • Company activities can be shared and read by employees
  • Secure, accessible only by employees
  • Can provide the position (GPS) of where the employee is, which can help if there is Covid-19 related information in that area

As a User / Employee

  • Easy and secure to access
  • A simple and informative UI
  • Employees can easily see the attendance history they have submitted.
  • Employees can keep up with news about the company.

Okay 🙂 With the requirements above in place, let's prepare and build the application:

  1. Prepare the Events data source for the company news. Here we simply use a list with the calendar type.

2. Prepare the Excel data source, where the attendance data will later be stored.

Sheet1

Sheet 2, the master data for the drop-down menus later.

3. Let's build it with Power Apps once points 1 and 2 above have been prepared.

The total number of screens we will create is 6 (six), namely:

  • Home Screen

Clicking Submit Attendance will Navigate(EditAttendanceScreen, ScreenTransition.None)

Clicking My Attendance List will Navigate(MyAttendanceScreen, ScreenTransition.Fade)

Clicking Company News will Navigate(NewsListScreen, ScreenTransition.Fade)

  • NewsListScreen

  • MyAttendanceScreen
  • DetailAttendanceScreen
  • EditAttendanceScreen (a submit formula sketch follows below)
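
As an illustration, the Submit button on EditAttendanceScreen could save a row to the Excel table together with the device's GPS position using a Patch formula. This is only a sketch: AttendanceTable and its column names are assumptions, not the exact names used in the app.

Patch(
    AttendanceTable,
    Defaults(AttendanceTable),
    {
        Employee: User().FullName,
        CheckInTime: Now(),
        Latitude: Location.Latitude,
        Longitude: Location.Longitude
    }
)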

The Power Apps form is now complete, so it is ready to be published and shared with the users in your organization who will use this application.

—————————————

English Version :

A month ago, there was the Microsoft Ignite 2021 event. I was interested in one presentation and demo about Power Apps.

That demo came from a company in Indonesia. They had the brilliant idea of building small, simple applications for their company to help with the digitalization and automation process. In that presentation, the speaker also explained how enthusiastic their employees were about using the application, and how they began to develop other applications for other needs.

After that presentation I thought: wow, I also want to try to build an application like that. An application that is fast to deliver, fast to develop, and fast to implement will accelerate the productivity of the company itself.

On 25 March 2021 I also have a webinar titled "Leverage your Work from Home Level Now", where I am the speaker. For that event, I tried to build an application on top of Power Apps, with company news data taken from our internal SharePoint Online and attendance data stored in an Excel file placed on OneDrive (Microsoft Office 365).

Here are the requirements in detail:

As the Human Resources Department and Company

  • The application has to be simple, and easy to develop and implement
  • Easy to recap attendance, because it uses Excel files.
  • Able to share company events with employees
  • Safe and secure, accessible only by employees.
  • Can provide the position (GPS) of where employees are located, which can help if there is Covid-19 related information in the area

As an Employee

  • The application is easy to use and secure
  • A simple and informative user interface
  • Employees can easily see the attendance history they have submitted
  • Employees can read information about events around the company.

Okay 🙂 With the requirements above, we can prepare and start to build the application:

  1. Prepare the Events data source for company news. Here we just need to use a list with the calendar type.

2. Prepare the Excel data source, where the attendance data will later be saved.

Sheet 1

Sheet 2

3. Let's build it once points 1 and 2 above have been prepared.

The total number of screens we will create is 6 (six), namely:

  • Home Screen
  • NewsListScreen
  • MyAttendanceScreen
  • DetailAttendanceScreen
  • EditAttendanceScreen

All complete, so it’s ready to publish and share with users in your organization who will use this application.

Happy Sharepoint-ing